Social Media Patents & Privacy Data
[©2020. Published in GPSOLO, Vol. 37, No. 5, September/October 2020, by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder.]
* Updated November 25, 2020, to include references to the CPRA/Prop 24.
The episode “Nosedive” of the television series Black Mirror envisions a society built on social credit scores. In this dystopia, all social media networks have converged into one platform—think Facebook, TikTok, Yelp, and Equifax combined.
This umbrella social platform allows users to rate each other on a five-point scale after each social interaction. Those with a high score gain access to job opportunities, favorable zip codes, and even high-status relationships. Those with a low score have the social ladder kicked out from under them, leading to a downward cycle of estrangement—and in the case of Black Mirror’s protagonist, jail time.
While the society in “Nosedive” seems far-fetched, is the technology behind it plausible?
Facebook Patents That Impact Privacy
According to Facebook’s patents, the answer is a resounding “yes.”
In a series of filings spanning almost a decade, Facebook has obtained several patents that allow social media platforms to track, identify, and classify individuals in new and innovative ways. Below are just a few.
Tracking individuals via dust. U.S. Patent No. 9485423B2, “associating cameras with users and objects in a social networking system” (filed September 16, 2010, patented June 25, 2013), allows social media networks to identify an individual’s friends and relationships by correlating users whose photos were taken with the same camera. To do so, an algorithm analyzes the metadata of a photo to find a camera’s “signature.”
The signature may be as straightforward as the camera’s naming convention for files. For example, two different users may upload “vacation2012-01aBCDxeg.jpg” and “vacation2012-02aBCDxeg.jpg,” and the algorithm would flag the users as potentially related based on the similar file names.
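To make the filename example concrete, here is a minimal sketch of this kind of matching. The regular expression, the signature definition (the trailing letter run after the sequence number), and the function names are illustrative guesses, not taken from the patent itself:

```python
import re
from collections import defaultdict

def camera_signature(filename):
    """Extract a hypothetical camera 'signature' from a filename:
    the letter suffix that follows a hyphenated sequence number
    (e.g. 'aBCDxeg' in 'vacation2012-01aBCDxeg.jpg')."""
    m = re.match(r".*?-\d+([A-Za-z]+)\.jpg$", filename)
    return m.group(1) if m else None

def group_users_by_signature(uploads):
    """uploads: list of (user, filename) pairs.
    Returns only the signatures shared by more than one user,
    i.e., users flagged as potentially related."""
    groups = defaultdict(set)
    for user, filename in uploads:
        sig = camera_signature(filename)
        if sig:
            groups[sig].add(user)
    return {sig: users for sig, users in groups.items() if len(users) > 1}
```

Running this on the two example uploads above would group both users under the shared “aBCDxeg” signature, while users with non-matching filenames fall out of the result.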
Intriguingly, the patent also identifies patterns of dust marks and scratches on a camera lens as a “signature,” allowing the algorithm to correlate users based purely on a photo’s pixels. We can imagine this same technology being applied to correlate users across platforms, profiles, and even anonymous postings, just through the unique “fingerprint” of the smudges on their camera, even if the photographer chooses to stay off camera.
Extrapolating further, Facebook’s technology could also be used to identify individuals who submit anonymous footage or photos to the police or media, based on the treasure trove of images available on the social media network.
Applying for credit via social networks. U.S. Patent No. 9100400B2, “authorization and authentication based on an individual’s social network” (filed August 8, 2012, patented August 4, 2015), allows third-party service providers to “examine an individual’s social network against a known black list of individuals that have been determined to be untrustworthy.” Based on this examination, a service provider can choose to authorize or deny access to a platform or service.
Troublingly, the patent application contemplates the idea of a social credit rating based on the individual’s social network. As noted in the text:
[w]hen an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected. (U.S. Patent No. 9100400B2)
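The decision rule quoted above reduces to a simple average-and-threshold check. The sketch below illustrates that logic; the 650 threshold and the handling of an applicant with no authorized connections are assumptions for illustration, not details from the patent:

```python
def social_credit_check(connection_scores, min_score=650):
    """connection_scores: credit scores of members of the applicant's
    social network reachable through authorized nodes.
    Returns True if the lender would continue processing the loan
    (average score meets the minimum), False if it would be rejected.
    The 650 default threshold is illustrative."""
    if not connection_scores:
        return False  # assumed behavior: no authorized connections to evaluate
    average = sum(connection_scores) / len(connection_scores)
    return average >= min_score
```

Note how the applicant’s own creditworthiness never enters the calculation; the decision turns entirely on the scores of the people the applicant is connected to, which is precisely what makes the mechanism troubling.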
The analysis of an individual’s creditworthiness based on his or her relationship with others can easily perpetuate race and wealth inequalities. Although the Equal Credit Opportunity Act prohibits discrimination on the basis of race, color, and other protected classifications, credit history and debt ratios can be considered in determining credit scores. If individuals are born to a lower socioeconomic class, they are more likely to associate with, or have “connections” with, individuals with more debt and lower credit scores. If communities have historically been denied credit or have been hesitant to establish credit due to historic bias, these shorter credit histories will translate to lower credit scores as well.
Turning Facebook “likes” into socioeconomic classes. U.S. Patent No. 10607154B2, “socioeconomic group classification based on user features” (filed July 27, 2016, patented March 31, 2020), presses this point even further, explicitly creating a system to determine a user’s socioeconomic status. This information is obtained by deriving models from global training data sets of “demographic information, device ownership, internet usage, household data, and socioeconomic status.” Then, these models are applied to information obtained about a user’s interaction on and off the social media platform through likes, follows, and interaction with paid advertising to output a probability score for each socioeconomic bracket (divided into working class, middle class, upper class, etc.).
Interestingly, this patent acknowledges that “[o]nline systems often do not have information about the income of users, for example, because the users are typically not inclined to share income information, which may be sensitive information, on online systems.” Thus, the invention bypasses individual reluctance to share income by inferring the same information from other sources.
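The classification the patent describes—applying a trained model to a user’s features and outputting a probability for each socioeconomic bracket—can be sketched as a softmax over per-bracket scores. The feature names, weights, and bracket labels below are entirely illustrative; the patent does not disclose a specific model:

```python
import math

def classify_socioeconomic(features, bracket_weights):
    """features: numeric user features (likes, device signals, etc.).
    bracket_weights: bracket name -> per-feature weights, assumed to be
    learned from the global training data the patent describes.
    Returns a probability for each bracket via a softmax over
    linear scores."""
    scores = {
        bracket: sum(w.get(f, 0.0) * v for f, v in features.items())
        for bracket, w in bracket_weights.items()
    }
    z = max(scores.values())  # subtract max for numerical stability
    exp_scores = {b: math.exp(s - z) for b, s in scores.items()}
    total = sum(exp_scores.values())
    return {b: e / total for b, e in exp_scores.items()}
```

The output is a probability distribution across brackets (working class, middle class, upper class, and so on), matching the patent’s description of a probability score for each socioeconomic bracket.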
Can Privacy-Intrusive Patents Do Good?
Facebook plays an outsized role in our political lives, and its patents reflect this role. To be fair to Facebook, its patents do not expose some dark, hidden agenda, but instead showcase the conflicting roles we want the company to play in our lives.
On one hand, we want Facebook and similar social media companies to protect our privacy. On the other hand, we want social media to show us relevant material, validate the truth of any content, and remove unlawful accounts and deepfakes (fake video or audio recordings that look and sound just like the real thing).
These goals are inconsistent with one another. Privacy protection requires data minimization and deidentification of data points across an organization. Content moderation, on the other hand, requires the exact opposite: automated data collection, validation, and creation of inferences to weed out inauthentic or irrelevant content.
Each of the patents above, while privacy-intrusive in many contexts, can also prove invaluable to content moderation. They may become necessary to root out deepfakes and authenticate “real” individuals and their posts. If these patents are limited to such purposes, they may end up protecting individual privacy in the long run.
As an example, if Facebook or TikTok know the relevant signature of a user’s camera, they can easily flag and take down images and videos purporting to be from the same “trusted” source but featuring a different “signature” filename or lens image. If a known deepfake account propagates numerous videos and images, social media networks can also search for the same “signature” across accounts to flag information from new fake accounts.
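The moderation use described above is, at bottom, a set-intersection check: compare the signatures extracted from an account’s uploads against signatures already tied to known fake sources. A minimal sketch, with hypothetical names throughout:

```python
def flag_matching_accounts(known_fake_signatures, account_uploads):
    """known_fake_signatures: set of camera/file signatures tied to
    known deepfake or untrusted sources.
    account_uploads: account name -> list of signatures extracted
    from that account's uploads.
    Returns the accounts sharing at least one signature with a
    known fake source, as candidates for review or takedown."""
    return {
        account
        for account, signatures in account_uploads.items()
        if known_fake_signatures & set(signatures)
    }
```

In practice, the same check could run in reverse to protect a trusted source: uploads claiming that source’s identity but carrying a different signature would be the ones flagged.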
Similarly, if a user is only connected to known spam, terrorist, or robot accounts and has no connections to “trusted” or authenticated individuals, these inventions can help social media and marketing companies flag the user, preventing future spam or phishing attacks.
Even Facebook’s socioeconomic classification patent can be put to good use, in the right hands. We may want to validate the location of public services against socioeconomic data gathered from social networks, to avoid overlooking working-class communities. Nonprofits and other charitable organizations may also want to target their online outreach efforts to working-class communities. In addition, socioeconomic classification data can be used to test other algorithms (e.g., health care models) for economic bias.
What about Privacy Laws on Social Media?
Privacy laws, both domestic and international, also play a pivotal role. Facebook is under heavy scrutiny for its data handling practices from courts and regulators alike, making it difficult, if not impossible, for Facebook or a similar company to implement a nationwide social credit system in the near future.
In 2019 and 2020, Facebook suffered the following privacy setbacks:
- On July 24, 2019, Facebook stipulated to a $5 billion settlement order with the Federal Trade Commission (FTC) due to its privacy practices. Stipulated Order for Civil Penalty, Monetary Judgment, and Injunctive Relief, Case 1:19-cv-02184, Dkt. 2-1 (D.D.C. July 24, 2019). The FTC is currently investigating Facebook, along with other big tech companies Amazon, Apple, and Google, for antitrust violations.
- On July 16, 2020, in Data Protection Commissioner v. Facebook Ireland Limited, Maximillian Schrems (Case C-311/18) (“Schrems II”), the Court of Justice of the European Union invalidated all personal data transfers under the EU-U.S. Privacy Shield framework with immediate effect. This decision did not affect only Facebook, but thousands of U.S. companies that rely on this data-transfer mechanism for EU-U.S. personal data.
- On July 23, 2020, Facebook offered a $650 million settlement for In re Facebook Biometric Information Privacy Litigation, 3:15-cv-03747, U.S. District Court, Northern District of California (San Francisco), after the judge found a prior $550 million offer to be insufficient for class action claims under the Illinois Biometric Information Privacy Act (BIPA).
- On August 10, 2020, on the heels of this $650 million settlement, Facebook was sued again under the BIPA based on accusations that the social media giant violated BIPA through Instagram photos and videos. Whalen v. Facebook, Inc., 20-CIV-03346 (Cal. Sup. Ct., San Mateo County).
In addition, the California Consumer Privacy Act (CCPA) is now online, paving the way for the California Attorney General to fine companies for failing to delete and stop sharing user data upon request. The CCPA has already spurred Facebook to launch its “Limited Data Use” feature, which allows businesses to limit Facebook’s data collection and ad targeting for California residents.
A California ballot initiative for the November 2020 election would introduce a new data protection agency in the state, allow opt-out rights for cross-contextual advertising, and provide individuals with limited rights to object to automated processing and profiling of their “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” If this ballot initiative passes, which at the time this article was written seemed likely, Facebook and any other social media company would face an uphill battle to justify, catalogue, and provide opt-out rights to all of the data (and inferences) it collects on particular users. (*Update: the California Privacy Rights Act (Prop 24) has since passed.)
Furthermore, the current U.S. and EU antitrust investigations against big tech companies, including Facebook, will likely preclude the formation of an omnibus social media platform like the one found in “Nosedive.”
Even though the technology exists to create a Black Mirror society, such a society is not inevitable. Technology is agnostic to our intentions, and many of Facebook’s privacy-intrusive patents can be used for socially good purposes. Furthermore, the current regulatory landscape is trending further toward privacy protection, making the Black Mirror society more fiction than fact (at least for now). In the end, it is up to us to take science fiction’s warnings to heart and plan the best path for our inventions.
Lily Li (firstname.lastname@example.org) is a privacy and cybersecurity lawyer and owner of Metaverse Law. Her practice focuses on helping companies comply with their privacy, cybersecurity, and data breach notification obligations under the California Consumer Privacy Act, GDPR, HIPAA, and other domestic and international laws. She is a Certified Information Privacy Professional for the United States and Europe.