Could Google Be “Selling Personal Information” Under the CCPA?
Credit: Rajeshwar Bachu on Unsplash
Google is being taken to court by California residents in an ambitious class-action lawsuit.
The plaintiffs claim, among other things, that Google is selling people’s personal data via the real-time bidding (RTB) process, without offering an opt-out. The case alleges that this violates the California Consumer Privacy Act.
The docket is 620 pages long, and—well—I’m not getting paid to write this particular article, so I confess that I haven’t read the whole thing in detail. It’s a complicated case drawing on several different areas of law.
So, rather than getting into this specific lawsuit, what I’d like to do is concentrate on one key question: Could Google be “selling” personal information, in violation of the CCPA, via the real-time bidding (RTB) process?
What’s RTB?
For a great explanation of RTB, let’s turn to cybersecurity supergenius Dr. Lukasz Olejnik, who offered me this explanation for an article I wrote last year:
“RTB involves three parties: the website, the RTB auction operator and the bidders. When the user is browsing a site (or launching a mobile app, for instance) that subscribes to RTB ads, the operator of the RTB system learns about this visit. They then launch an auction, sending information concerning the user to the bidders”
The “bidders” in this scenario are bidding on the chance to present an ad to the user.
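To make that flow concrete, here’s a minimal Python sketch of the sequence Olejnik describes. It’s purely illustrative—the class names and data fields are my own inventions, and real systems use standardised bid-request formats such as OpenRTB with far richer data.

```python
# Purely illustrative sketch of the RTB flow described above.
# All names and fields are invented; real systems use the OpenRTB protocol.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class BidRequest:
    """What the auction operator shares with every bidder about the visit."""
    page_url: str
    user_id: str                  # e.g. a cookie or mobile advertising ID
    inferred_interests: List[str]


class Bidder:
    def __init__(self, name: str, valued_interest: str):
        self.name = name
        self.valued_interest = valued_interest

    def bid(self, request: BidRequest) -> float:
        # Each bidder sees the user's data *before* deciding whether to bid.
        return 2.00 if self.valued_interest in request.inferred_interests else 0.10


def run_auction(request: BidRequest, bidders: List[Bidder]) -> Tuple[str, float]:
    """The operator broadcasts the request; the highest bidder wins the ad slot."""
    bids = [(b.name, b.bid(request)) for b in bidders]
    return max(bids, key=lambda item: item[1])


request = BidRequest("https://example-news.site/article", "user-abc-123",
                     ["running shoes", "travel"])
print(run_auction(request, [Bidder("exchange-a", "travel"),
                            Bidder("exchange-b", "cars")]))
# ('exchange-a', 2.0)
```

The step that matters legally is the broadcast itself: every bidder receives the user’s information before deciding whether to bid, regardless of who wins.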
Is this really “selling” personal information?
Under the CCPA, I believe this could qualify as “selling” personal information. Here’s why:
The CCPA has a broad definition of “personal information,” which includes “internet or other electronic network activity information” and “inferences drawn” from such data “to create a profile” about a person’s preferences.
A “sale” is any disclosure of personal information “for monetary or other valuable consideration.”
What’s “valuable consideration”?
California law defines “valuable consideration” at Cal. Civ. Code § 1605.
This definition is very broad. Bidders don’t have to give Google money. Google just has to benefit from the RTB process to potentially bring the activity under the definition of “sale.”
Is it illegal to sell personal information under the CCPA?
No, but you must offer consumers an opt-out and respect their choice to opt out.
Hasn’t Google protected itself against CCPA claims?
Yes—or so it hopes.
When the CCPA took effect, Google created a new “restricted data processing” policy “to help advertisers, publishers and partners manage their compliance” with the CCPA.
What’s a restricted data processing policy?
Google’s policy is an attempt to bring itself under the CCPA’s “service provider” exemption. When you transfer personal information to a service provider, the transfer won’t qualify as a “sale” even if you benefit from it.
Under this policy, Google stops using the information in a way that would constitute a “sale,” instead processing it for “business purposes” as a “service provider”.
So Google just has to call itself a “service provider” and everything’s fine?
No. A “service provider” under the CCPA must fulfill certain characteristics—most importantly, it must operate under the instructions of its client business (in this case, “advertisers, publishers and partners”) via a written agreement.
Think of a “service provider” as being like a “data processor” under the GDPR. There are many differences between these two entities, but essentially, the business (the “data controller”, in GDPR terms) is in charge.
A service provider agreement must require the service provider to:
Only process the personal information it receives from a business for specific business purposes.
Not use, disclose, or retain the personal information for any purpose outside of the contract, unless otherwise permitted by the CCPA.
So what “business purposes” does Google use personal information for under the “restricted data processing” policy?
If a publisher has “restricted data processing” turned on for California consumers, Google only uses the data for conversion tracking and campaign measurement. These are valid business purposes under the CCPA.
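As a rough sketch of what that gating means in practice, here’s a hypothetical Python illustration. The flag, purpose names, and structure are my own invention; this is not Google’s actual API.

```python
# Hypothetical illustration of "restricted data processing" gating.
# The flag and purpose names are invented; this is not Google's API.
from typing import Set

CCPA_BUSINESS_PURPOSES = {"conversion_tracking", "campaign_measurement"}


def permitted_purposes(restricted_data_processing: bool) -> Set[str]:
    """Return the purposes a California user's data may be used for on this request."""
    if restricted_data_processing:
        # "Service provider" mode: only the limited business purposes survive.
        return set(CCPA_BUSINESS_PURPOSES)
    # Otherwise the data can also feed personalised ads and RTB bidding.
    return CCPA_BUSINESS_PURPOSES | {"ad_personalisation", "rtb_bidding"}


# It is the publisher, not Google, that decides whether to set the flag.
print(permitted_purposes(restricted_data_processing=True))
print(permitted_purposes(restricted_data_processing=False))
```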
So… case closed?
Not quite. It appears that it is the publisher’s responsibility to enable “restricted data processing.” Presumably, there are publishers who have not done so.
If there are indeed publishers who continue to send California consumers’ personal information to Google, without offering an opt-out, and Google continues to pass this personal information to bidders via the RTB process, this may constitute a “sale”.
Aren’t publishers liable for this?
Arguably, yes, but Google might also be liable.
If Google is “collecting” California consumers’ personal information, which can include “receiving” it, and “selling” it downstream, it doesn’t necessarily matter where it obtained the information from. Google would have to make those consumers aware of their “right to opt out” before selling their personal information.
So is Google going to have to pay up?
Again, I’m not going to get into the specifics of this particular lawsuit, but in general, there’s a huge issue with bringing this claim under the CCPA.
The CCPA’s “private right of action” only applies in the event of a data breach. And not just any data breach—a really specific type of data breach.
First, the data that has been breached must be “private information”, which consists of a person’s first name (or first initial) and last name, PLUS another piece of data from a list of specific elements (SSN, etc.). At least some of this data also has to be unencrypted.
Second, the data breach has to involve all four of these interlinked elements:
Unauthorized access, AND
Exfiltration, theft, or disclosure, AS A RESULT OF
Failure to implement and maintain reasonable security procedures and practices to protect the personal information, THAT ARE
Appropriate to the nature of the information
This just doesn’t seem relevant to RTB at all. That’s not to say there has been no CCPA violation—but it might need to be enforced by the California Attorney General, rather than private litigants.
Just to reiterate: This is not a judgment about this specific case and I am not alleging that Google violates the CCPA.
Enjoy this post? There’s plenty more where that came from. I send a newsletter out once a week.
It’s nearly five years after the GDPR passed, and nearly three years since it came into force. While the upcoming ePrivacy Regulation will change the European privacy landscape, don’t expect the GDPR itself to change any time soon.
In a resolution this Thursday, the European Parliament said that the GDPR:
Has been “an overall success”
Has “become a global standard for the protection of personal data”
Has “placed the EU at the forefront of international discussions about data protection”
Does not require any “update or review”
However, there’s a big “but”…
Why did they say these lovely things?
The EU Parliament’s comments come in a resolution—passed by 483 votes to 96, with 108 abstentions—responding to the Commission’s evaluation report on the GDPR. In short, the Commission said that the GDPR did not require amendment, and the Parliament agreed.
What’s the “but”?
Despite lavishing praise on the regulation itself, most of the comments in the Parliament’s resolution are quite negative.
The main problem, as the European Parliament sees it, is a lack of enforcement. The Parliament was pretty scathing about Ireland’s and Luxembourg’s DPAs:
“…these DPAs are responsible for handling a large number of cases, since many tech companies have registered their EU headquarters in Ireland or Luxembourg… the Irish data protection authority generally closes most cases with a settlement instead of a sanction… cases referred to Ireland in 2018 have not even reached the stage of a draft decision…”
Ouch…
Ouch indeed. The Parliament also lists some ways in which a lack of enforcement of the GDPR is leading to poor outcomes for EU data subjects.
For example, this passage about the ubiquity of targeted advertising and algorithms:
“…profiling, although only allowed by Article 22 GDPR under strict and narrow conditions, is increasingly used as the online activities of individuals allow for deep insights into their psychology and private life…”
And this passage, which cites digital “monopoly situations”:
“…further efforts are needed to address broader issues of digitisation, such as monopoly situations and power imbalances…”
The Parliament also criticised the Commission’s method of adopting adequacy decisions:
“…adequacy decisions should not be political but legal decisions… so far adequacy decisions have only been adopted for nine countries, even though many additional third countries have recently adopted new data protection laws with similar rules and principles as the GDPR.”
Are they sure they think the GDPR has been an “overall success"?
The issues raised by the Parliament are all about enforcement. The resolution doesn’t really criticise any provisions in the GDPR—at least not to the extent that they would require amendment.
However, some of the principal issues creating a lack of enforcement are the GDPR’s one-stop-shop and consistency mechanisms. The one-stop-shop mechanism leads to most complaints about firms like Facebook, Google, and Amazon being forwarded to Ireland and Luxembourg—and then going no further.
It is notable that both the Commission and the Parliament stopped short of actually recommending that the EU amends these seemingly dysfunctional mechanisms.
Washington Privacy Act Defeats People’s Privacy Act—But Will It Pass?
I know… Another piece about emerging U.S. state privacy laws—but I promise this one is particularly interesting.
On Friday, Washington lawmakers voted on two highly significant privacy laws: the Washington Privacy Act (WPA) and the People’s Privacy Act (PPA).
The WPA is a relatively strong privacy law by U.S. standards, but critics say it’s not strong enough. The PPA was proposed as a more powerful alternative bill.
First, we’ll take a look at the WPA.
Hasn’t Washington been trying to pass this law for a while?
Yes, Washington has been trying to pass state privacy legislation for a few years now.
Pollyanna Sanderson, policy counsel at Future of Privacy Forum, has been watching the WPA’s progress through the state’s legislature very closely. I asked her about the history of the bill.
“This is the third time that the Washington Privacy Act has been introduced in Washington State,” Sanderson told me via email. “Each year, the legislation has become more sophisticated, and has made it further through the legislative body.”
“Last session, the legislation almost passed. After passing out of the Senate, it narrowly failed to pass in the House.”
“At the time, the Attorney General stated that the legislation was ‘unenforceable,’” Sanderson said. “Since then, legislators have worked with the Attorney General to improve enforceability.”
What’s so interesting about this version of the WPA?
Sanderson said the most impressive elements of the proposed law were its “risk assessments, purpose and retention limitations, sophisticated research provisions,” plus the obligation for businesses to obtain opt-in consent before processing sensitive data.
“Moreover,” she said, “the consent language is incredibly strong—prohibiting deceptive user interfaces known as ‘dark patterns.’”
In its current state, the law would include a limited private right of action allowing injunctive relief (i.e., no money) if businesses violate certain provisions. These provisions include the WPA’s consumer rights, anti-discrimination rules, and opt-in consent requirements.
How does the WPA compare to other emerging U.S. state privacy laws?
“The WPA offers a more attractive model for US privacy legislation than the CPRA,” Sanderson said. “Its framework is more sophisticated, risk-based, and is more comparable to the GDPR.”
Sanderson noted that California’s CPRA “contains an opt-out—the right to limit the use and sharing of sensitive information.” But the WPA’s range of opt-outs is broader, allowing consumers the chance to refuse “sales, risky profiling, and targeted advertising.”
“This provision would overcome a loophole contained in the California law which has enabled targeted advertisers to continue business as usual,” she said.
What about the People’s Privacy Act?
The People’s Privacy Act (PPA) was a stronger proposal for Washington’s privacy law, sponsored by Representative Shelley Kloba and supported by the American Civil Liberties Union (ACLU), among others.
A tweet from the ACLU’s Jennifer Lee summarised how support is split across the two bills: the WPA has been drafted with the involvement of industry lobbyists, whereas the PPA is supposedly more of a “grassroots” bill, supported by nonprofits.
Unfortunately for the ACLU and others, the PPA failed to advance this Friday.
So is the PPA better than the WPA?
PPA proponents have been harshly critical of the WPA. Most of this criticism focuses on the WPA’s mostly “opt out” consent provisions, its limited right of action, and the fact that it doesn’t apply to employees or students.
The WPA also offers businesses the opportunity to “cure” violations before being punished for them. The PPA would have only allowed this for the first year.
Do the PPA’s proponents have a point?
I’ve always been slightly taken aback by the strength of feeling from PPA supporters. Looking at the situation from across the Atlantic, the WPA seems like one of the strongest privacy bills with a chance of passing in the U.S.
But perhaps I’m being too pragmatic. The ACLU does some amazing work on privacy, and it’s their prerogative to always push for stronger protections.
So the PPA failed to advance and the WPA advanced… but will the WPA pass?
When I first spoke to Future of Privacy Forum’s Sanderson about the WPA back in January, she said she was “cautiously optimistic” that it would pass. This week, her outlook had changed.
“I am not sure about whether WPA will pass,” she said. “In fact, I am a little pessimistic.”
So perhaps it’ll be “fourth time lucky” for the WPA next year.
A German business used MailChimp to send marketing emails. One of the recipients noticed that MailChimp might not be GDPR compliant, and made a complaint to the Bavarian DPA (BayLDA).
What was the alleged problem with MailChimp?
MailChimp is a U.S. company that uses standard contractual clauses (SCCs) to facilitate transfers of personal data from EU-based controllers to the U.S.
Since the Schrems II judgment invalidated the EU-U.S. “Privacy Shield” framework, SCCs are basically the only option for most companies wishing to transfer EU personal data to the U.S.
The complainant alleged that their personal data was not properly safeguarded by the arrangement between the German business and MailChimp.
Even though the transfer to MailChimp used SCCs?
That’s right. While Schrems II found that SCCs—unlike Privacy Shield—are still valid, that doesn’t mean they will always be enough to safeguard personal data subject to a third-country transfer.
As noted by the EDPB, controllers are expected to assess third-country transfers on a case-by-case basis to ensure SCCs are sufficient. If not, it might be necessary to take supplementary measures to protect personal data against interference.
So SCCs weren’t good enough in this case?
Correct—particularly because MailChimp might be an “electronic communication service provider” under U.S. law.
What’s an electronic communication service provider?
Under U.S. law—in particular, a surveillance law known as FISA Section 702—“electronic communication service providers” include cloud service providers, ISPs, and email providers.
These companies are particularly vulnerable to interference from the U.S. government. This means they might be forced to allow government access to EU data subjects’ personal data.
Without supplementary measures designed to mitigate this possibility (NB: it’s not clear that such measures actually exist), the DPA said that the agreement between the German business and MailChimp was unlawful under Article 44 of the GDPR.
The controller had not properly assessed the risk, and therefore had not taken any supplementary measures to safeguard the personal data.
What happens now?
There has been relatively little coverage of this case, but it seems that MailChimp might no longer be a viable data processor for EU controllers.
This is one of the more significant implementations of the Schrems II judgment, along with the Doctolib decision, which I covered last week. It seems obvious that we’ll see more and more cases like this.
However, the U.S. and EU announced on Thursday that they would be “intensifying” Privacy Shield negotiations. There appears to be a clear willingness to engage on this issue.
But it remains to be seen whether the U.S. will be willing to budge on its surveillance legislation—or whether the European Commission will be able to find a solution that will stick (unlike the last two).
Is Facebook ‘Wiretapping’ Via Its ‘Like’ Button?
Facebook is facing a gigantic lawsuit accusing the company of violating a 1968 wiretapping law. Wiretapping sounds a little clandestine even for Facebook… What’s the deal?
Facebook is facing a $15 billion class-action in the U.S. from people who allege it illegally tracked its users outside of Facebook between April 2010 and September 2011.
Facebook asked the Supreme Court to dismiss the case, but this week, the Supreme Court refused.
What’s the case about?
The central claim is that Facebook used its “Like” button to track its users outside of Facebook, even when they were not logged in. The plaintiffs say this violates a U.S. law known as the Wiretap Act.
Wiretapping? That doesn’t sound relevant to Facebook’s “Like” button
The Wiretap Act was first enacted in 1968 and recast in 1986 to include reference to “electronic communications”. It’s actually a little bit like the EU’s ePrivacy Directive.
The Act prohibits the interception of communications by someone who is not party to the communication, i.e., someone who is eavesdropping.
Again, what does this have to do with the “Like” button?
Facebook’s Like button is all over the web and allows users to “like” content on external websites.
But the Like button, in combination with Facebook’s cookies, also collects data about people—even if they don’t use the button, and even if they aren’t logged into Facebook.
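Mechanically, this works because the user’s browser fetches the embedded button from Facebook’s servers and automatically attaches the page address and any cookies previously set for that domain. Here’s a simplified Python sketch of that request from the browser’s side—the endpoint, cookie name, and page URL are illustrative stand-ins, not Facebook’s real infrastructure.

```python
# Simplified sketch of the request a browser makes when a page embeds a
# third-party "Like" button. Endpoint and cookie name are illustrative stand-ins.
import urllib.request

widget_request = urllib.request.Request(
    "https://social-widgets.example.com/plugins/like_button",
    headers={
        # The page the user is reading is revealed via the Referer header.
        "Referer": "https://some-news-site.example/sensitive-article",
        # Any cookie previously set by the widget's domain is attached
        # automatically, tying this visit to an existing browser profile --
        # even if the user never clicks the button or isn't logged in.
        "Cookie": "browser_id=long-lived-identifier",
    },
)
# urllib.request.urlopen(widget_request)  # the widget host now knows who read what
print(widget_request.headers)
```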
Facebook used this data to—you guessed it—target ads (Facebook says it no longer uses the data of logged-out users for this purpose).
The company even tracked non-Facebook users in some cases—but said this data was not used for advertising purposes.
Has anyone tried to stop Facebook from doing this before?
Yes, lots of times. In fact, this Wiretap Act case came before a U.S. federal judge in 2017, but it was dismissed because the plaintiffs failed to demonstrate that they had a “reasonable expectation of privacy” or that they suffered an economic loss.
The Like button is more controversial than many people realise, having triggered complaints in Canada, Germany, and Belgium, among other places.
Did any of those complaints work?
No. The Belgian DPA initially imposed a fine on Facebook, but the company appealed and won.
The Belgian DPA was found not to have jurisdiction over Facebook, whose EU headquarters is in Ireland. And, as we know, Ireland is the place Facebook investigations go to die.
Will Facebook win this time?
This is a highly significant case, but it’s very hard to say whether the plaintiffs will succeed.
In defence of the Wiretap Act claims, Facebook says that it is not “eavesdropping” on the communication between visitors and the non-Facebook websites that they visit because it is party to the communication.
On the face of it, this argument seems difficult to sustain given that the user might not even be aware of the presence of a Like button on the website they are visiting—however, there are some interesting legal precedents in this area.
Either way, the Supreme Court’s decision to allow the case to proceed will offer greater clarity on the Wiretap Act and its relationship with online advertising. The law might turn out to be a suitable stopgap until the U.S. develops a federal privacy law (if ever).
If you want to go deeper into these issues, I strongly recommend this Lawfare blog by Erik Manukyan, written after the Ninth Circuit Court revived this case last year.
“Here’s a disturbing thought for those of us who are critics of the tech industry: are we unduly credulous about the capabilities of the technology as extolled by the companies and their paid evangelists? Did clever exploitation of social media really lead to the election of Trump and the Brexit vote in 2016, for example?
At one level, the answer to that has to be ‘no’…”
This short piece for The Guardian suggests that targeted advertising—the industry that is powering a huge section of the economy and has amassed exabytes of data about billions of people—is all hype.
Lots of people have been saying this for a long time, of course—and Naughton’s article provides a good jumping-off point if you find this argument intriguing.
“On the one hand, the legislation has been a clear win for Brussels: By setting data protection rules and levelling the global playing field, the European Union can claim to be a rulemaker rather than a ruletaker when it comes to protecting private information online.
On the other, its implementation has been a huge headache for the average business, organization and citizen. But most importantly, the GDPR is seriously hampering the EU’s capacity to develop new technology and desperately needed digital solutions, for instance in the realm of e-governance and health.”
Did you find yourself shaking your head in disagreement while reading the European Parliament’s unrelenting praise of the GDPR?
If so, you should get to know Axel Voss, an insider-critic of the GDPR. He has many ideas about how the EU should be amending the regulation. (Note: Voss isn’t the “father of the GDPR”, as many outlets wrongly characterised him this month.)
I’m not necessarily endorsing this article, but I do think it’s important to understand this perspective. Outside of my Twitter and LinkedIn bubbles, there are many people who like the GDPR less than I do.
Mark Zuckerberg once said that the Children's Online Privacy Protection Act (COPPA) was a "fight" Facebook would "take on at some point."
This week, we learned Facebook is planning an Instagram for under-13s. If this is the fight, Facebook will probably win.
Why? Why would Facebook do this?
Instagram is currently unavailable for under-13s because they have special legal protections, meaning that it’s harder for businesses to collect their data and to target them with ads.
In the U.S., the main children’s privacy law is COPPA, a federal law passed in 1998—before Facebook, YouTube, or Instagram existed.
Since 2013, COPPA has required websites to get parental consent before tracking under-13s with cookies.
Does COPPA have teeth?
Some COPPA settlements seem big at first—like the Federal Trade Commission’s $170 million settlement with Google in 2019. But bear in mind that Google's turnover was $160 billion that year.
Google allegedly tracked users under 13 on YouTube. After the settlement, YouTube basically shifted COPPA liability to content creators (among other measures).
TikTok (previously Musical.ly) also settled for $5.7 million with the FTC under COPPA in 2019.
Whether through carelessness or by design, apps and sites are repeatedly allowing kids to sign up without getting parental consent.
What about outside the U.S.?
Outside the U.S., things could be even more complicated for Facebook’s new venture.
I covered two ongoing U.K. child privacy cases last year alleging platforms had violated the GDPR's child privacy rules.
The first was against YouTube, aiming for an ambitious $2.5 billion in damages. This had to do with how YouTube processes kids’ data to make recommendations. The second, which is at a very early stage, was against TikTok, led by a 12-year-old girl.
Recently, we’ve seen some EU DPAs coming down hard on social media apps under the GDPR’s child privacy rules. See Italy’s recent action against TikTok, for example.
How will Facebook avoid problems like this?
Setting up a separate platform for youngsters might be a way for Facebook to avoid claims under child privacy law.
For example, Tiny Instagram could require parental consent at sign-up. Or it might avoid collecting certain types of data. Or it could—imagine this—avoid targeting ads based on users’ personal information.
Or, Facebook could just take the risk, pay any resulting fines, and work out some way of complying with enforcement notices while still turning a profit.
Cases like those explored above could be a headache for Facebook. But I think it'll be fine—it always is.
Doctolib Ruling: Does Schrems II Now Apply to Inter-EU Transfers?
French court ruling says vaccine-booking platform’s contract with Amazon is lawful. But this case isn’t as clear-cut as it seems.
I’m drawing my analysis here from the IAPP’s summary of the case.
Here’s the background
The French Conseil d’Etat looked at a data processing agreement between Doctolib, whose platform is used for booking vaccinations, and AWS Sarl, a Luxembourg-based subsidiary of Amazon Web Services.
Doctolib used AWS to process health data. The claimants asked the court to suspend transfers of personal data between Doctolib and AWS.
Who cares?
The case was significant, in part, because so many EU data controllers use AWS—or another U.S. subsidiary—as a data processor. The case could have invalidated a lot of data processing agreements and caused a lot of companies serious issues.
Why would the agreement with AWS have been a problem?
It all comes back to Schrems II. Because AWS Sarl is a subsidiary of Amazon, the claimants argued that the personal data in its care was at risk of interception under U.S. laws.
Even though AWS Sarl is storing the data in the EU?
Yes—U.S. surveillance laws like FISA 702 and EO 12333 affect certain U.S. companies even when operating overseas. This means that AWS Sarl might be obliged to submit personal data to U.S. intelligence services.
So Schrems II applies despite there being no third-country transfer of personal data?
In a sense, yes. Controllers always need to take appropriate safeguards when disclosing personal data to a processor—and the risk of access by intelligence services is a relevant consideration.
The Conseil d’Etat examined the data processing agreement between Doctolib and AWS Sarl to determine whether there were sufficient safeguards to protect the personal data—just as the CJEU examined the safeguards provided by Privacy Shield in Schrems II.
What did the court decide?
The Conseil d’Etat said that the data processing agreement was valid and that it would not suspend transfers between Doctolib and AWS Sarl.
Phew! So all processing agreements with AWS Sarl are safe?
No. This case has been reported in these terms, but in my view, this isn’t the right takeaway.
The court found that the transfers to AWS Sarl were valid because of the safeguards Doctolib and AWS Sarl had put in place. These included:
AWS was contractually bound to challenge access requests by foreign authorities (NB: I’m not sure such challenges would make much difference).
The data was encrypted and the key was held by a “trusted third-party” in France (the general technique is sketched below).
There was a relatively short retention period of three months.
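To illustrate the second of those safeguards in general terms: if the data is encrypted before it reaches the cloud processor and the decryption key stays with a separate party, the processor holds only ciphertext it cannot read on its own. Here’s a minimal sketch using Python’s cryptography library—this shows the general technique, not Doctolib’s actual architecture.

```python
# General illustration of encrypting data before handing it to a cloud processor,
# with the key held by a separate "trusted third party" rather than the processor.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # held by the trusted third party, never by the processor
ciphertext = Fernet(key).encrypt(b"booking: vaccination appointment, 2021-03-12")

# Only ciphertext is stored on the processor's infrastructure.
cloud_storage = {"booking/123": ciphertext}

# Without the externally held key, the processor (or anyone compelling it)
# cannot recover the plaintext.
print(Fernet(key).decrypt(cloud_storage["booking/123"]))
```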
There are a couple of points here that might be open to challenge. The court also found that the data about vaccinations was not “health data” under the GDPR. I was surprised by this part.
Controllers should consider whether their processors and subprocessors are subject to interception under surveillance law—regardless of where their servers are actually based.
The point of the international transfer provisions is to safeguard personal data, and it’s worth thinking about any transfer or processing agreements—third-country or otherwise—in these fundamental terms.
(This, of course, is not legal advice.)
Could the US Lose Its Best Privacy Law?
Illinois lawmakers are trying to undermine the Biometric Information Privacy Act (BIPA). This is one of the few U.S. privacy laws providing Americans with real privacy protection.
Maybe it’s not objectively the best, but my favourite U.S. privacy law is Illinois’ Biometric Information Privacy Act (BIPA), which passed way back in 2008. BIPA is one of the most powerful—albeit limited—privacy laws in the U.S.
BIPA requires businesses to provide notice and obtain consent before collecting biometric information from consumers, including facial recognition data, fingerprints, and voiceprints.
Sounds reasonable?
Not according to a series of Illinois bills that have attempted to weaken the law, apparently under the guise of helping “small businesses”.
Various bills proposed by Illinois legislators have attempted to remove BIPA’s private right of action, narrow its scope, change its definition of “biometric information”, or chip away at its consent requirements.
Why do you like BIPA so much, anyway?
BIPA has resulted in some high-profile cases and settlements, not least the $650 million Facebook class-action settlement from earlier this month.
BIPA is also the basis of litigation against Clearview AI. Clearview’s business model involves hoovering up social media photos (including, most likely, yours) without notice or consent, extracting unique biometric data about the subjects’ faces, sorting them into a searchable database, and selling access to that database to police and, until recently, private companies.
It's just one state law. If it gets repealed, what's the big deal?
The stitching-together of America’s patchwork of privacy laws has been one of the big stories of 2021 so far. But the country still lacks meaningful, rights-based privacy protection for much of its population.
Other than BIPA, there is no effective U.S. law prohibiting companies like Facebook and Clearview from gathering biometric information without consent (although that may change soon).
This will change when Virginia’s Consumer Data Protection Act (CDPA) comes into force. However, this law has no private right of action, so there will be less incentive for businesses to comply with it.
Repealing or amending BIPA would be a huge step backward for U.S. privacy law, at a time when it seems to be moving forward faster than ever.
Under the Data Protection Directive, Facebook relied on the legal basis of "consent" for cookies.
The GDPR passed in 2016, with a higher consent standard. Consent now had to be obtained via an “unambiguous,” “clear, affirmative action.”
Facebook's consent request was no longer valid. What would the company do? Ask for consent in a valid way? Stop undertaking activities that require consent (such as using tracking cookies)?
No—on the day the GDPR came into force, in May 2018, Facebook copied its consent request into its terms. The social media platform said it was now relying on the legal basis of "contract”, not consent.
What are the requirements for relying on the legal basis of “contract”?
The lawful basis of ”contract” is for when you need to process personal data to perform your obligations under a contract with the data subject.
If you order a product from Amazon, Amazon needs your address to send it to you—and Amazon can rely on “contract” to collect and use your address for this purpose.
Facebook said it "needed" cookies to enable its business model to operate. After all, the social media giant can’t fulfill its obligations under the Facebook Terms of Service if it goes out of business—right?
Facebook also needed cookies to provide personalised ads (as “promised” in Facebook’s terms), and to enable the user to use Facebook for free.
So… Is that a valid reliance on “contract”?
Not according to the European Data Protection Board (EDPB).
The EDPB says activities that are "necessary for the performance of a contract" do not include "activities (that) are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model."
I covered this case after it was heard by the Viennese Superior Court. The court’s decision seemed odd to many people—like me—who spend a lot of time submerged in data protection.
There were other factors at play here—local contract law, for one. But it seems likely that the Austrian Supreme Court will refer the case to the Court of Justice of the European Union (CJEU), which may well confirm Schrems’ arguments.
So what if the CJEU says Facebook must get “proper” consent? What will Facebook do?
Here’s one possibility.
The ePrivacy Regulation will come into force soon. Under the current version, controllers will be allowed to make access to services contingent on consent to cookies—as long as they offer an alternative service that doesn’t involve cookies, for which they can charge a fee.
In my view, this provision creates two tiers of consent in the EU (which I think is problematic).
Facebook is unlikely to be happy about the idea of offering users a genuinely free choice over its use of cookies.
But what if Facebook could offer a paid alternative, safe in the knowledge that most people would continue to use the free version?
A paid version of Facebook? Unlikely. But it’s a possibility.
Irish DPA Criticised (Again) Over GDPR Enforcement
The Irish DPA has been criticised by the German federal data regulator. Fair enough?
The Irish Times has reported on a letter from Germany’s federal data protection commissioner (BfDI), Ulrich Kelber, criticising the Irish Data Protection Commission (DPC).
This letter reiterated what many observers have been saying about the Irish DPC for some time. As the European home of most big tech companies, Ireland has earned a reputation as a haven from GDPR enforcement.
Is the Irish DPC’s reputation fair?
Think of it this way. As lead supervisory authority to Facebook and Google, the DPC’s job is to ensure these companies comply with data protection law.
Under the one-stop-shop procedure, DPAs have to forward complaints about these firms to the DPC except in certain specific circumstances.
Despite this, Ireland has never concluded an investigation into Google, Facebook, or any of their subsidiaries. But other DPAs have—even within the very narrow set of circumstances under which they have been permitted to do so.
Here's a list of every EU DPA that has fined each firm since 2018.
Google:
France (under both the GDPR and the ePrivacy Directive)
Belgium
Sweden
Hungary
(Not Ireland)
Facebook:
Hamburg
UK (Data Protection Directive)
France (ePrivacy Directive)
(Not Ireland)
So what enforcement action has the Irish DPA taken against big tech firms?
Just one penalty, against Twitter for €450,000, after it failed to properly notify the DPC of a data breach.
This isn't a large fine—well under 0.1% of Twitter's turnover. But the Irish DPA originally proposed an even smaller penalty, of between €135,000 and €275,000.
This small penalty was seen as too lenient by other EU DPAs. They disputed it under the first-ever use of the GDPR’s Article 65 procedure. Several DPAs recommended multi-million euro fines.
On the other hand, Ireland is reportedly due to impose a €50 million fine on WhatsApp later this year (but this hasn’t been officially confirmed yet).
This is a complex issue, and the GDPR isn't all about fines. There’s also some question as to whether the German regulator was right to criticize the DPC in this way. But the letter does highlight this bottleneck in GDPR enforcement.
I’m not too proud to plug one of my own articles this week, about Facebook’s SEER AI and the implications for privacy. I spoke to some great sources who shared some excellent insights.
You’ve probably heard about Google’s plans to overhaul online advertising. Briefly, the plans include:
Removing third-party cookies from Chrome
Bucketing users into “cohorts” based on online behaviours (rather than individual targeting)
Building protections against device fingerprinting
On the face of it, this sounds reasonable. But as Google reveals more about its proposals, tech-watchers are growing increasingly cynical.
Concerns centre around both antitrust and privacy.
On the antitrust front, the U.K.’s Competition and Markets Authority (CMA) has been investigating Google’s plans since January, over concerns that they would consolidate a dominant market position.
For my article about the CMA’s Google investigation, I spoke to Michelle Meagher, author of Competition Is Killing Us, who argues Google is using privacy as an “excuse” for monopolistic behaviour:
“What we’re seeing is Google attempting to use privacy as a shield or an excuse for consolidating its stranglehold over online advertising."
“Google’s vision is for our privacy to be entirely protected — by them.”
The CMA’s investigation was triggered by the group Marketers for an Open Web, who claim the proposals would “effectively create a Google-owned walled garden that would close down the competitive, vibrant open web.”
Oracle’s Ken Glueck makes a similar argument:

“Google’s sandbox is little more than an attempt at using privacy as a pretext to solidify its dominance. It creates anticompetitive rules for everyone to abide by, except for Google. Third parties—some people call them competitors—will be in the dark, but first parties—that would be Google—will have a 20/20 view into every consumer’s likes, desires, and location, to sell ads.”
Side note: While Glueck provides an excellent overview of the concerns around Google’s Federated Learning of Cohorts (FLoC), it’s important to note that Oracle derives significant value from cookies. I interviewed Rebecca Rumbul last year, who is bringing a case against Oracle over its alleged misuse of cookies.
Antitrust aside, there’s also considerable concern about whether Google’s plans are, in fact, good for privacy.
Google’s new ad system will group people who share similar advertising targeting characteristics into “interest groups”. But it has not yet defined the minimum threshold (“k-anonymity threshold”) for the size of an interest group and the degree of uniqueness of characteristics of people within it.
Cohort size matters: smaller cohorts increase the likelihood that individuals can be identified.
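In rough terms, a k-anonymity threshold means a cohort label is only exposed to advertisers if at least k users share it. Here’s a small, purely illustrative Python sketch—the threshold and cohort data are invented, since Google hasn’t published its values.

```python
# Illustrative k-anonymity check on cohort assignments; all values are invented.
from collections import Counter
from typing import Optional

cohort_assignments = {
    "user-1": "cohort-sports",
    "user-2": "cohort-sports",
    "user-3": "cohort-sports",
    "user-4": "cohort-rare-condition",  # only one user shares this cohort
}

K_THRESHOLD = 3  # minimum cohort size before the label may be exposed


def exposable_cohort(user_id: str) -> Optional[str]:
    """Reveal a user's cohort only if at least K users belong to it."""
    sizes = Counter(cohort_assignments.values())
    cohort = cohort_assignments[user_id]
    return cohort if sizes[cohort] >= K_THRESHOLD else None


print(exposable_cohort("user-1"))  # cohort-sports
print(exposable_cohort("user-4"))  # None -- too small, too identifying
```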
In The Privacy Mirage, Eric Benjamin Seufert argued that Google isn’t really improving privacy at all—just redefining the concept so as to align with its business practices:
…by artificially defining “privacy” as the distinction between first- and third-party data usage, the largest platforms simply entrench their market positions… In this way, “privacy” is a mirage: the largest platforms define privacy such that it is always just one big, sweeping change away from being achieved.
Third-party cookies are bad for privacy. But is Google’s increased dominance worse? Perhaps the answer lies in restricting or amending Google’s proposals to ensure better competition (as interoperability researcher Ian Brown suggested to me).
The coming months will see Google release more information regarding its changes to online advertising. The proposals will require close scrutiny.
EU Institutions At Odds Over Privacy
The European Data Protection Board stands strong on privacy, while the Commission, Parliament, and Council seek to undermine the confidentiality of communications.
The European Data Protection Board (EDPB) held its 46th plenary session this week. The EDPB Chair, Andrea Jelinek, said:
“The ePrivacy Regulation must not—under no circumstances [sic]—lower the level of protection offered by the current ePrivacy Directive, and should complement the GDPR by providing additional strong guarantees for confidentiality and protection of all types of electronic communication.”
There's a very different mood in the European Parliament, which recently voted overwhelmingly in favour of the Commission’s controversial proposed ePrivacy derogation.
The “chat control” law (as the European Pirate Party is calling it) is a proposed temporary derogation to the ePrivacy Directive that would oblige email and messaging platforms to scan all communications and report suspected child sexual abuse material to law enforcement authorities.
This legislation should be taken together with the Council’s proposed ePrivacy Regulation, published in February, which would permit the processing of (pseudonymised) communications metadata for law enforcement purposes.
In a statement on Tuesday, the EDPB showed clear disapproval of any watering down of rights in the ePrivacy Regulation:
Any possible attempt to weaken encryption, even for purposes such as national security would completely devoid those protection mechanisms due to their possible unlawful use. Encryption must remain standardized, strong and efficient
Writing for Lawfare, Theodore Christakis and Kenneth Propp explained how France has been pushing for broad national security exemptions in the ePrivacy Regulation, allegedly to avoid complying with the Court of Justice of the European Union’s October judgements in Privacy International and La Quadrature du Net (LQDN).
Are these proposals compatible with the EU Charter of Fundamental Rights? If not, we could see the EU’s lawmaking institutions pitted against the CJEU, with the EDPB shaking its fist in the background.
T-Mobile to Sell US Customers’ Web Usage Data on Opt-Out Basis
Telecoms giant will start selling information about how customers use the web and what apps are on their phones unless they opt out.
A New York Times editorial this week argued that the U.S. should enact a privacy law with an "opt-in" model of consent, suggesting that Virginia and California missed this opportunity with their recent legislation.
That same week, T-Mobile announced that it will start selling U.S. customers’ data on an opt-out basis from 26 April.
The NYT's editorial has been getting a lot of pushback from U.S. readers — much of which is well-founded, arguing for a more principles-led, “data fiduciary” role for businesses.
For my part, I think opt-in consent still plays a role in data protection, and I think criticizing states for choosing “opt-out” models is valid. On Monday, I defended the concept of opt-in consent in a LinkedIn article, In Defence of the NYT's Opt-In Consent Editorial.
The T-Mobile case is an example of where opt-in consent would have worked well.
The company will use information including “web and device usage data” (which includes information about the apps installed on people's phones) to target first and third-party ads.
Customers can opt out, but some say the process is difficult—Harvard researcher Elettra Bietti, for example, has described her struggle to do so.
Bietti, incidentally, makes an excellent case for moving beyond a “notice and consent” model of privacy law.
In the U.S., ISPs are allowed to sell their customers’ browsing history (thanks in part to the Trump administration, which killed the FCC’s broadband rules in 2017).
Short of banning this practice altogether, states passing privacy laws can help protect their citizens’ privacy by prohibiting the sale of all personal information without opt-in consent.
In fact, Virginia’s new state privacy law does prohibit the sharing or sale of “sensitive personal information” without opt-in consent. Sensitive personal information includes people’s “precise geolocation.”
T-Mobile’s new policy, as it happens, doesn’t apply to “precise location data”:
A couple of things that are not changing — We do not use or share… precise location data for advertising unless you give us your express permission…
Perhaps if opt-in consent applied to the sale of other types of data — even in just a few states — T-Mobile customers’ web usage data might have been spared.
Apple Facing Another Investigation in Europe
Big tech’s most privacy-focused company is facing three data protection complaints and one antitrust investigation
Privacy is a major part of Apple's brand. So you might be surprised to learn that the company is dealing with multiple investigations by EU data protection authorities over allegations that it is breaching privacy law.
Most recently, Apple was referred to France's CNIL over allegations that ad personalisation is turned "on" by default in iOS 14. The group behind the complaint, France Digitale, claims that this violates the ePrivacy Directive and the GDPR.
Back in December, I wrote about a similar complaint filed with data protection authorities in Berlin and Spain by privacy group Noyb.
Noyb says Apple shouldn't be installing its ID for Advertisers (IDFA) on millions of iPhones without consent, arguing that it violates Article 5(3) of the ePrivacy Directive.
Here’s what Bennett Cyphers of the Electronic Frontier Foundation (EFF) told me about Apple’s IDFA:
“IDFA is a dangerous, privacy-intrusive tool that goes against Apple’s stated concerns about user privacy. It is designed to help advertisers and tracking companies at users’ expense.”
Apple’s changes to iOS mean that users will be asked to consent to being tracked via the IDFA. But the installation of the tracker, Noyb argues, also requires consent.
So that's three ongoing EU privacy investigations into Apple — plus an antitrust investigation in the UK (which I covered in last week’s edition).
It's fair to say that Apple does a better job of preserving people's privacy than certain competitors. But the EU's data protection authorities will need to determine whether the company is acting within EU privacy law.
UK Reiterates Intention to Diverge from EU Data Protection Standards
The U.K.’s culture secretary has repeated his ambiguous claims about the future data protection regime.
The “unashamedly pro-tech” minister has made similar comments in the past, including in a Financial Times op-ed earlier this month, but has been relatively coy about providing solid details.
In one sense, the U.K. can do whatever it likes with its data protection law, now it isn't part of the EU.
But the government can’t move too far away from EU standards without putting the U.K.’s data protection “adequacy decision,” drafted by the European Commission last month, at risk.
Failing to achieve or maintain adequacy would mean more red tape for businesses and, arguably, make British firms less attractive as prospective business partners.
The government keeps signaling its intention to liberalise data protection law. In February, Prime Minister Boris Johnson said the U.K. would develop a “separate and independent” data protection policy from the EU.
So what do we know about the U.K.’s plans? Not a lot.
The U.K.-Japan trade deal, concluded in October last year, contained clauses suggesting that the U.K. could be planning to operate two models of data protection—an EU version and a more liberal Asia-Pacific version—according to an article by Javier Ruiz for Open Rights Group.
The U.K.’s disregard for EU principles in its surveillance laws continues to take the country further away from adequacy. While this matter didn’t preclude a draft adequacy decision, it might conceivably cause any final decision to be overturned by the CJEU somewhere down the track.
Then there’s the appointment of the next Information Commissioner, who heads the U.K.’s data protection authority. While Liz Denham’s replacement hasn’t yet been announced, the government has clearly signaled that it hopes to appoint someone who will prioritise innovation (which will most likely come at the expense of enforcement).
There is some risk in the U.K. loudly declaring its intention to depart from EU standards when the draft adequacy decision contains a four-year review period.
But adequacy means “essential equivalence”—not absolute equivalence. So how much room for manoeuvre do adequacy decision recipients have?
Looking at the list of “adequate” countries, many have data protection regimes that are much less strict than the U.K.’s, including Canada, Israel, and New Zealand. But, as Douwe Korff and Ian Brown point out, these are older decisions that require review by the Commission.
David Erdos argues that some wriggle room is possible, particularly if the U.K. commits to the continued recognition of the Council of Europe’s Convention 108 and complies with the standard of “essential equivalence.”
Sacrificing the U.K.’s adequacy decision in the name of economic stimulus might be unwise. The UCL European Institute estimates that implementing alternative safeguards for data transfers could cost businesses up to £1.6 billion in compliance costs alone.
This excellent long-read from Karen Hao describes her observations about Facebook’s AI program, which has allegedly failed to devote sufficient resources to preventing the algorithmic promotion of disinformation.
Hao’s piece has caused a significant backlash from Facebook, which Hao and her editorial team have made public on Twitter.
The internet was not designed for children, but we know the benefits of children going online. We have protections and rules for kids in the offline world – but they haven’t been translated to the online world.
Brown’s excellent blog post demonstrates beyond any doubt that this is a fallacy.
UK Government Denies Thousands of Migrants Access to Personal Data
The UK’s Home Office used the “immigration exemption” to deny over 14,000 subject access requests in 2020
The UK's Home Office used the "immigration exemption" to deny people access to their personal data over 14,000 times in 2020 — over 72% of the subject access requests it received.
The subject access request is the backbone of data protection law. It lets you see who is holding your data and what they're doing with it.
In the UK, migrants and their lawyers routinely use subject access requests to view data held by the Home Office. This data can form the basis of life-or-death deportation orders — it's crucial that it's correct.
When the government passed the GDPR into UK law, via the Data Protection Act 2018, it included a get-out clause. Any subject access request could be denied if complying with it would "prejudice" the "maintenance of effective immigration control".
The government insisted this exemption would be used in a proportionate way and in "relatively limited circumstances".
We now know it has been used to deny nearly three-quarters of subject access requests to the Home Office.
How could each of these 14,027 subject access requests possibly have presented a "prejudice" to the "maintenance of effective immigration control"? What does this phrase even mean?
The government has never justified its use of this overly broad exemption.
For an upcoming article, I spoke to representatives from two UK charities — Open Rights Group and the3million — who are taking the government to court over the immigration exemption.
They told me how much pain this provision is causing migrants dealing with residency and deportation issues.
There’s still a chance that their case against the government can overturn this bad law on human rights grounds.
Facebook Training Its AI Using a Billion Non-Europeans’ Instagram Photos
Team Zuckerberg’s “SEER” might be learning via your holiday photos — unless you’re protected by the GDPR.
Facebook has announced an AI breakthrough: its slightly-terrifyingly-named “SEER” (SElf-supERvised) AI model has been trained using unlabelled photos, supposedly removing the requirement for human intervention.
But Facebook chose to exclude EU users (and presumably those from the UK and the wider EEA), due to protections afforded by the GDPR.
This is a noteworthy decision. Tech firms often claim this type of aggregate data processing doesn’t constitute an invasion of privacy.
So why exclude those people protected by the world’s most powerful privacy law?
Non-Europeans should be asking whether this is what they reasonably expected when they signed up for Instagram.
Apple Under Antitrust Investigation in the UK
Apple is facing a probe over its App Store rules. The company’s dominance reinforces — and further necessitates — its strict grip on iOS developers.
The UK's antitrust regulator is investigating Apple over its app store terms. We need big tech firms to strictly regulate content on their platforms. But we only "need" this because these platforms are so dominant.
The Competition and Markets Authority (CMA) says it is concerned that Apple's Terms and Conditions for app developers are "unfair and anti-competitive".
The CMA's investigation will consider whether developers should have to agree to certain terms before launching their apps in the App Store, and Apple's rules on in-app payments.
The CMA cites the fact that App Store apps are subject to pre-approval by Apple as a matter that is pertinent to the investigation.
These rules benefit Apple. But they also, to some extent, benefit consumers. Many iPhone users like the fact that App Store apps tend to be of reasonable quality and tend to have strong security.
The CMA is also investigating Google's phasing out of third-party cookies. I'm all for this change — third-party cookies are bad for privacy (although I have my reservations about Google’s new advertising method).
But Google is so huge that many thousands of its competitors are dependent on its policies, making this an antitrust issue.
If Apple and Google weren't such dominant market players, this stuff wouldn't matter so much. Developers could find another app store. Advertisers could use other networks.
But regulators have let these companies get so huge that everything they do matters disproportionately.
South African Information Regulator Demands Answers From Facebook Over WhatsApp Terms
South Africa’s regulator wants to know why Facebook didn’t seek prior consultation over WhatsApp terms changes. Perhaps because the law isn’t yet in force…?
We all know WhatsApp is changing its terms. Thanks to the GDPR, European users are largely protected from these changes. But South Africa's Information Regulator is asking: Why aren't South Africans protected too?
The answer might actually be pretty straightforward.
South Africa's POPIA is a relatively strong privacy law with many similarities to the GDPR. In one area, it's actually even stricter.
Section 57 of the POPIA requires responsible parties (controllers) to seek prior authorization from the Information Regulator where they seek to process personal information:
For purposes that were not specified when the information was collected, and
When they aim to link personal information with data from other responsible parties
This seems to be exactly what Facebook has planned for WhatsApp.
So the question is: Why didn't Facebook seek prior authorization from the South African Information Regulator?
Well, there might be a straightforward answer.
Most provisions of the POPIA—including the prior consultation rules—commenced in July 2020, but won't be enforced until July 2021.
So it's not clear that the Information Regulator has much of a case against Facebook, given that the prior consultation rules won't be enforceable when the changes to WhatsApp take effect.
What’s the deal here? Am I confused about POPIA’s commencement schedule (I’ve double-checked)? Did the regulator not realise that South African law doesn’t apply here? Or is this a bluff?
I’ve asked the South African Information Regulator. No reply yet.
Virginia Passes Consumer Data Protection Act
Virginia’s new privacy law isn’t all that impressive from a European perspective. But it just about places Virginia on an even footing with California.
The U.S. took another (small) step towards greater privacy protection this week when Virginia passed the Consumer Data Protection Act (CDPA).
The law has been criticised as a missed opportunity. Its consumer rights are strictly opt-out, except with regard to the collection and use of “sensitive” personal information.
But — for me — some progress is better than none.
The law has been compared to California’s privacy regime and (wrongly) to the GDPR.
Here are four important distinctions between Virginia’s brand-new CDPA and California's also-quite-new California Privacy Rights Act (CPRA).
1) The CDPA has no private right of action. You can’t take a company to court for violating the CDPA. This makes it more popular among industry players and less popular among lawyers.
2) The CDPA is more liberal in the area of non-discrimination. California’s “right to non-discrimination” prevents businesses from charging higher prices to customers who exercise their consumer privacy rights. This provision was watered down due to concerns from businesses that they would be unable to operate loyalty schemes; subsequent amendments forced businesses to demonstrate the value of the personal information they received in exchange for discounts. The CDPA appears to have taken a less convoluted approach.
3) The CDPA requires businesses to conduct "data protection assessments" in many situations, including when conducting targeted advertising. This is the most interesting provision, in my eyes. These assessments are similar to the GDPR’s Data Protection Impact Assessments (DPIAs).
4) The CDPA’s definition of “sensitive” personal data is slightly broader, as it includes data collected from children under 13 among its sensitive categories.
A year or two ago, I wouldn’t have expected to read a high-profile Editorial Board piece in the New York Times advocating a federal, opt-in-consent-based privacy law.
This excellent piece takes a critical look at data collection in the U.S. and criticises Virginia’s new CDPA (which I discussed above).
Americans have become inured to the relentless collection of their personal information online. Imagine, for example, if getting your suit pressed at the dry cleaner's automatically and permanently signed you up to have scores of inferences about you — measurements, gender, race, language, fabric preferences, credit card type — shared with retailers, cleaning product advertisers and hundreds of other dry cleaners, who themselves had arrangements to share that data with others. It might give you pause.
That’s it! Thanks for reading. I’ve been overwhelmed by the number of new subscribers I’ve received today. See you next Sunday.