Data Protection: Is Google About to Close the Open Web?
Google's FLoC plans, EU privacy tensions, T-Mobile data sell-off, Apple privacy investigation, UK data protection divergence.
In #2, covering 8-14 March 2021, I’ll be talking you through these five stories:
Google’s FLoC proposals are provoking antitrust and privacy concerns
A split could be emerging among EU institutions on privacy
T-Mobile plans to start selling U.S. customers’ web usage data on an opt-out basis
Apple is under its third ongoing EU privacy investigation, this time in France
The U.K. has reiterated its intention to depart from EU data protection standards
I’ve also recommended an article at the end of the newsletter.
FLoC: the Final Stage of Google’s Web Takeover?
Many observers say Google’s “private” web proposals are a bold move to consolidate its market dominance.
You’ve probably heard about Google’s plans to overhaul online advertising. Briefly, the plans include:
Removing third-party cookies from Chrome
Bucketing users into “cohorts” based on online behaviours (rather than individual targeting)
Building protections against device fingerprinting
On the face of it, this sounds reasonable. But as Google reveals more about its proposals, tech-watchers are growing increasingly cynical.
Concerns centre around both antitrust and privacy.
On the antitrust front, the U.K.’s Competition and Markets Authority (CMA) has been investigating Google’s plans since January, over concerns that Google’s plans would consolidate a dominant market position.
For my article about the CMA’s Google investigation, I spoke to Michelle Meagher, author of Competition Is Killing Us, who argues Google is using privacy as an “excuse” for monopolistic behaviour:
“What we’re seeing is Google attempting to use privacy as a shield or an excuse for consolidating its stranglehold over online advertising."
“Google’s vision is for our privacy to be entirely protected — by them.”
The CMA’s investigation was triggered by the group Marketers for an Open Web, who claim the proposals would “effectively create a Google-owned walled garden that would close down the competitive, vibrant open web.”
Ken Glueck, executive vice president of Oracle, expressed a similar sentiment in a blog post last Sunday titled Google’s Privacy Sandbox—We’re all FLoCed:
Google’s sandbox is little more than an attempt at using privacy as a pretext to solidify its dominance. It creates anticompetitive rules for everyone to abide by, except for Google. Third parties—some people call them competitors—will be in the dark, but first parties—that would be Google—will have a 20/20 view into every consumer’s likes, desires, and location, to sell ads.
Side note: While Glueck provides an excellent overview of the concerns around Google’s Federated Learning of Cohorts (FLoCs), it’s important to note that Oracle derives significant value from cookies. I interviewed Rebecca Rumbul last year, who is bringing a case against Oracle over its alleged misuse of cookies.
Antitrust aside, there’s also considerable concern about whether Google’s plans are, in fact, good for privacy.
In his article 4 Big Questions About Google’s New Privacy Position, Johnny Ryan of the Irish Council for Civil Liberties (ICCL) pointed out that we don’t yet know a lot about how private FLoCs will be:
Google’s new ad system will group people who share similar advertising targeting characteristics into “interest groups”. But it has not yet defined the minimum threshold (“k-anonymity threshold”) for the size of an interest group and the degree of uniqueness of characteristics of people within it.
Cohort size matters: smaller cohorts increase the likelihood that individuals can be identified.
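Ryan’s point about cohort size can be made concrete with a small sketch. The following Python snippet is an illustration only — the data and the `unsafe_cohorts` helper are hypothetical, not Google’s actual FLoC mechanism — showing why a minimum cohort size (the k-anonymity threshold) matters: any cohort smaller than k narrows its members down too far.

```python
from collections import Counter

def unsafe_cohorts(user_cohorts, k):
    """Return the cohort IDs whose membership falls below the
    k-anonymity threshold k (i.e. cohorts that are too identifying)."""
    sizes = Counter(user_cohorts.values())
    return {cohort for cohort, size in sizes.items() if size < k}

# Hypothetical assignment of users to behavioural cohorts.
assignments = {
    "alice": "cohort_a",
    "bob": "cohort_a",
    "carol": "cohort_a",
    "dave": "cohort_b",  # a cohort of one uniquely identifies its member
}

print(unsafe_cohorts(assignments, k=2))  # → {'cohort_b'}
```

Until Google publishes the threshold it intends to enforce, there is no way to assess how identifying a FLoC cohort ID could be in practice.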
In The Privacy Mirage, Eric Benjamin Seufert argued that Google isn’t really improving privacy at all—just redefining the concept so as to align with its business practices:
…by artificially defining “privacy” as the distinction between first- and third-party data usage, the largest platforms simply entrench their market positions… In this way, “privacy” is a mirage: the largest platforms define privacy such that it is always just one big, sweeping change away from being achieved.
Third-party cookies are bad for privacy. But is Google’s increased dominance worse? Perhaps the answer lies in restricting or amending Google’s proposals to ensure better competition (as interoperability researcher Ian Brown suggested to me).
The coming months will see Google release more information regarding its changes to online advertising. The proposals will require close scrutiny.
EU Institutions At Odds Over Privacy
The European Data Protection Board stands firm on privacy, while the Commission, Parliament, and Council seek to undermine the confidentiality of communications.
The European Data Protection Board (EDPB) held its 46th plenary session this week. The EDPB Chair, Andrea Jelinek said:
“The ePrivacy Regulation must not —under no circumstances [sic]—lower the level of protection offered by the current ePrivacy Directive, and should complement the GDPR by providing additional strong guarantees for confidentiality and protection of all types of electronic communication.”
There's a very different mood in the European Parliament, which recently voted overwhelmingly in favour of the Commission’s controversial proposed ePrivacy derogation:
The “chat control” law (as the European Pirate Party is calling it) is a proposed temporary derogation to the ePrivacy Directive that would oblige email and messaging platforms to scan all communications and report suspected child sexual abuse material to law enforcement authorities.
This legislation should be taken together with the Council’s proposed ePrivacy Regulation, published in February, which would permit the processing of (pseudonymised) communications metadata for law enforcement purposes.
In a statement on Tuesday, the EDPB showed clear disapproval of any watering down of rights in the ePrivacy Regulation:
Any possible attempt to weaken encryption, even for purposes such as national security would completely devoid those protection mechanisms due to their possible unlawful use. Encryption must remain standardized, strong and efficient.
Writing for Lawfare, Theodore Christakis and Kenneth Propp explained how France has been pushing for broad national security exemptions in the ePrivacy Regulation, allegedly to avoid complying with the October Court of Justice of the European Union judgements in Privacy International and LQDN.
Are these proposals compatible with the EU Charter of Fundamental Rights? If not, we could see the EU’s lawmaking institutions pitted against the CJEU, with the EDPB shaking its fist in the background.
T-Mobile to Sell US Customers’ Web Usage Data on Opt-Out Basis
Telecoms giant will start selling information about how customers use the web and what apps are on their phones unless they opt out.
In the previous edition of Data Protection, I recommended reading the New York Times editorial on opt-in consent: America, Your Privacy Settings Are All Wrong.
This piece argued that the U.S. should enact a privacy law with an "opt-in" model of consent, and suggested that Virginia and California have missed this opportunity with their recent legislation.
This same week, T-Mobile announced that it will start selling U.S. customers’ data on an opt-out basis, starting on 26 April.
The NYT's editorial has been getting a lot of pushback from U.S. readers — much of which is well-founded, arguing for a more principles-led, “data fiduciary” role for businesses.
For my part, I think opt-in consent still plays a role in data protection, and I think criticising states for choosing “opt-out” models is valid. On Monday, I defended the concept of opt-in consent in a LinkedIn article, In Defence of the NYT's Opt-In Consent Editorial.
The T-Mobile case is an example of where opt-in consent would have worked well.
The company will use data including “web and device usage data” (covering the apps installed on people's phones) to target first- and third-party ads.
Customers can opt out, but some say the process is difficult. Harvard researcher Elettra Bietti, for one, has described her own attempt to do so.
Bietti, incidentally, makes an excellent case for moving beyond a “notice and consent” model of privacy law.
In the U.S., ISPs are allowed to sell their customers’ browsing history (thanks in part to the Trump administration, which killed the FCC’s broadband privacy rules in 2017).
Short of banning this practice altogether, states passing privacy laws can help protect their citizens’ privacy by prohibiting the sale of all personal information without opt-in consent.
In fact, Virginia’s new state privacy law does prohibit the sharing or sale of “sensitive personal information” without opt-in consent. Sensitive personal information includes people’s “precise geolocation.”
T-Mobile’s new policy, as it happens, doesn’t apply to “precise location data”:
A couple of things that are not changing — We do not use or share… precise location data for advertising unless you give us your express permission…
Perhaps if opt-in consent applied to the sale of other types of data — even in just a few states — T-Mobile customers’ web usage data might have been spared.
Apple Facing Another Investigation in Europe
Big tech’s most privacy-focused company is facing three data protection complaints and one antitrust investigation.
Privacy is a major part of Apple's brand. So you might be surprised to learn that the company is dealing with multiple investigations by EU data protection authorities over allegations that it is breaching privacy law.
Most recently, Apple was referred to France's CNIL over allegations that ad personalisation is turned "on" by default in iOS 14. The group behind the complaint, France Digitale, claims that this violates the ePrivacy Directive and the GDPR.
Back in December, I wrote about a similar complaint filed with data protection authorities in Berlin and Spain by privacy group Noyb.
Noyb says Apple shouldn't be installing its ID for Advertisers (IDFA) on millions of iPhones without consent, arguing that it violates Article 5(3) of the ePrivacy Directive.
Here’s what Bennett Cyphers of the Electronic Frontier Foundation (EFF) told me about Apple’s IDFA:
“IDFA is a dangerous, privacy-intrusive tool that goes against Apple’s stated concerns about user privacy. It is designed to help advertisers and tracking companies at users’ expense.”
Apple’s changes to iOS mean that users will be asked to consent to being tracked via the IDFA. But the installation of the tracker, Noyb argues, also requires consent.
So that's three ongoing EU privacy investigations into Apple — plus an antitrust investigation in the UK (which I covered in last week’s edition).
It's fair to say that Apple does a better job of preserving people's privacy than certain competitors. But the EU's data protection authorities will need to determine whether the company is acting within EU privacy law.
UK Reiterates Intention to Diverge from EU Data Protection Standards
The U.K.’s culture secretary has repeated his ambiguous claims about the future data protection regime.
Sky News published the glibly titled Government to reform data protection laws to spur economic growth on Thursday, in which Oliver Dowden, the U.K.’s culture secretary, states that he is “seeking to set out where we are going to go with data” post-Brexit.
The “unashamedly pro-tech” minister has made similar comments in the past, including in a Financial Times op-ed earlier this month, but has been relatively coy about providing solid details.
In one sense, the U.K. can do whatever it likes with its data protection law, now that it isn't part of the EU.
But the government can’t move too far away from EU standards without putting the U.K.’s data protection “adequacy decision,” drafted by the European Commission last month, at risk.
Failing to achieve or maintain adequacy would mean more red tape for businesses and, arguably, make British firms less attractive as prospective business partners.
The government keeps signaling its intention to liberalise data protection law. In February, Prime Minister Boris Johnson said the U.K. would develop a “separate and independent” data protection policy from the EU.
So what do we know about the U.K.’s plans? Not a lot.
The U.K.-Japan trade deal, concluded in October last year, contained clauses suggesting that the U.K. could be planning to operate two models of data protection—an EU version and a more liberal Asia-Pacific version—according to an article by Javier Ruiz for Open Rights Group.
The U.K.’s disregard for EU principles in its surveillance laws continues to take the country further from adequacy. While this didn’t preclude a draft adequacy decision, it could conceivably cause any final decision to be overturned by the CJEU somewhere down the track.
Then there’s the appointment of the next Information Commissioner, who heads the U.K.’s data protection authority. While Elizabeth Denham’s replacement hasn’t yet been announced, the government has clearly signaled that it hopes to appoint someone who will prioritise innovation (most likely at the expense of enforcement).
There is some risk in the U.K. loudly declaring its intention to depart from EU standards when the draft adequacy decision contains a four-year review period.
But adequacy means “essential equivalence”—not absolute equivalence. So how much room for manoeuvre do adequacy decision recipients have?
Looking at the list of “adequate” countries, many have data protection regimes that are much less strict than the U.K.’s, including Canada, Israel, and New Zealand. But, as Douwe Korff and Ian Brown point out, these are older decisions that require review by the Commission.
David Erdos argues that some wriggle room is possible, particularly if the U.K. commits to the continued recognition of the Council of Europe’s Convention 108 and complies with the standard of “essential equivalence.”
But move too far, and there is a risk that the adequacy decision goes the way of the U.S. Privacy Shield framework.
Sacrificing the U.K.’s adequacy decision in the name of economic stimulus might be unwise. The UCL European Institute estimates that implementing alternative safeguards for data transfers could cost businesses up to £1.6 billion in compliance costs alone.
Recommended Reading
How Facebook got addicted to spreading misinformation | Karen Hao | MIT Technology Review
This excellent long read from Karen Hao describes her observations about Facebook’s AI program, which she argues has failed to devote sufficient resources to preventing the algorithmic promotion of misinformation.
Hao’s piece has caused a significant backlash from Facebook, which Hao and her editorial team have made public on Twitter.
Hao’s article is a detailed and considered insight into some of the most impactful work in tech.
Unpicking the "making children as safe as they are offline" fallacy | Neil Brown | decoded.legal blog
The Information Commissioner recently said:
The internet was not designed for children, but we know the benefits of children going online. We have protections and rules for kids in the offline world – but they haven’t been translated to the online world.
Brown’s excellent blog post demonstrates beyond any doubt that this is a fallacy.