Signal President Meredith Whittaker is calling out the EU for what she calls “rhetorical games” designed to mask the bloc’s efforts to destroy end-to-end encryption (E2EE).
Despite leading the world in pro-privacy regulation, the European Union has recently been hell-bent on undermining E2EE, especially in the context of messaging apps and platforms. As President of one of the leading secure messaging platforms, Whittaker is mincing no words in calling out the EU:
End-to-end encryption is the technology we have to enable privacy in an age of unprecedented state and corporate surveillance. And the dangerous desire to undermine it never seems to die. For decades, experts have been clear: there is no way to both preserve the integrity of end-to-end encryption and expose encrypted contents to surveillance. But proposals to do just this emerge repeatedly — old wine endlessly repackaged in new bottles, aided by expensive consultancies that care more about marketing than the very serious stakes of these issues. These embarrassing branding exercises do not, of course, sway the expert community. But too often they work to convince non-experts that the risks of the previous plan to undermine end-to-end encryption are not present in the shiny new proposal. This is certainly how the EU chat control debate has proceeded.
Whittaker goes on to praise the EU Parliament for initially opting to exclude E2EE apps from the EU’s chat control legislation:
In November, the EU Parliament lit a beacon for global tech policy when it voted to exclude end-to-end encryption from mass surveillance orders in the chat control legislation. This move responded to longstanding expert consensus, and a global coalition of hundreds of preeminent computer security experts who patiently weighed in to explain the serious dangers of the approaches on the table — approaches that aimed to subject everyone’s private communications to mass scanning against a government-curated database or AI model of “acceptable” speech and content.
Despite the EU Parliament’s steps, Whittaker says some countries have continued to push for weakening chat encryption, re-branding their efforts to avoid public outcry:
Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.
Whittaker then goes on to drop the hammer:
Rhetorical games are cute in marketing or tabloid reporting, but they are dangerous and naive when applied to such a serious topic with such high stakes. So let’s be very clear, again: mandating mass scanning of private communications fundamentally undermines encryption. Full stop. Whether this happens via tampering with, for instance, an encryption algorithm’s random number generation, or by implementing a key escrow system, or by forcing communications to pass through a surveillance system before they’re encrypted. We can call it a backdoor, a front door, or “upload moderation.” But whatever we call it, each one of these approaches creates a vulnerability that can be exploited by hackers and hostile nation states, removing the protection of unbreakable math and putting in its place a high-value vulnerability.
We ask that those playing these word games please stop and recognize what the expert community has repeatedly made clear. Either end-to-end encryption protects everyone, and enshrines security and privacy, or it’s broken for everyone. And breaking end-to-end encryption, particularly at such a geopolitically volatile time, is a disastrous proposition.
The EU’s ‘Rhetorical Games’ Defined
As is often the case when such efforts are discussed, combating child sexual abuse material (CSAM) is the cited justification.
In the initially proposed chat control legislation, the EU would have required messaging platforms to engage in mandatory client-side scanning for CSAM. The EU proposal was much like the system Apple announced and then abandoned after concluding there was no way to implement it without compromising security.
With client-side scanning, images and videos are fingerprinted on the user's device and compared against a database of known CSAM. The EU also proposed using on-device AI to identify unknown CSAM that is not yet in the database of known material.
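To make the matching step concrete, here is a deliberately simplified sketch of perceptual-hash comparison, the general technique such scanners rely on. This is a toy "average hash" over a flat list of grayscale pixel values, not the algorithm any real system (such as PhotoDNA or Apple's NeuralHash) actually uses; the database contents and threshold are invented for illustration.

```python
# Toy illustration of perceptual-hash matching (NOT a real scanning system).
# An "average hash" sets each bit to 1 if the corresponding pixel is brighter
# than the image's mean brightness. Visually similar images produce similar
# bit strings even after minor edits or re-compression.

def average_hash(pixels):
    """pixels: flat list of grayscale values (0-255). Returns a bit string."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def hamming_distance(h1, h2):
    """Count of differing bits between two equal-length hash strings."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical "known material" database of hashes, and a match threshold
# that tolerates small differences (e.g. re-compression artifacts).
known_hashes = {average_hash([10, 200, 30, 220, 15, 210, 25, 215, 20])}
THRESHOLD = 2

# A slightly altered copy of the same image still matches...
altered = average_hash([12, 198, 33, 219, 14, 212, 27, 213, 22])
print(any(hamming_distance(altered, k) <= THRESHOLD for k in known_hashes))   # True

# ...while an unrelated image does not.
unrelated = average_hash([200, 10, 190, 20, 205, 15, 198, 12, 201])
print(any(hamming_distance(unrelated, k) <= THRESHOLD for k in known_hashes))  # False
```

The same fuzziness that makes perceptual hashes robust to re-encoding is also why they can produce false positives and why the database itself, not the math, decides what gets flagged, which is the core of the repurposing concern discussed below.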
After the EU Parliament excluded E2EE messaging platforms from the legislation, lawmakers changed the proposal, describing it as upload moderation. Essentially, if a user wants to share photos or videos via an E2EE messaging platform, they must consent to allow client-side scanning. If they don’t consent, they can still chat, but will not be able to share media.
Why the EU’s Efforts Would Destroy Privacy and Security for All
Unfortunately, neither of the above methods—client-side scanning and AI scanning—is foolproof, and both have been shown to generate false positives, opening the door for innocent people’s lives to be ruined. What’s more, both methods are easily circumvented with minimal effort, casting serious doubt on their efficacy. In addition, such a system could easily be commandeered by oppressive governments or companies to scan for other material.
In fact, prior to Apple’s ill-fated plans to include client-side scanning, researchers at Princeton created a system that was nearly identical to what Apple proposed. After proving it could be done, the Princeton researchers then wrote a paper on why the technology should never be used.
“Our system could be easily repurposed for surveillance and censorship,” the researchers wrote. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”
While every viable method of combating CSAM should be implemented, countless security experts have warned that weakening E2EE is not a viable method. Not only would it not have a measurable impact on combating CSAM, it would also lead to devastating consequences for billions of innocent people around the world. As a result, the Electronic Frontier Foundation (EFF), politicians, legal experts, privacy experts, security experts, civil rights activists, and even human trafficking survivors have objected to various attempts to undermine E2EE, such as Apple’s brief attempt to introduce client-side scanning.
In fact, the EU’s own lawyers warned that the EU’s initial chat control proposal was likely illegal because it would “require the general and indiscriminate screening of the data processed by a specific service provider, and apply without distinction to all the persons using that specific service, without those persons being, even indirectly, in a situation liable to give rise to criminal prosecution.”
Those same attorneys said the bloc’s plans must be “proportionate only for the purpose of safeguarding national security” and “it is rather unlikely that similar screening of content of communications for the purpose of combating crime of child sexual abuse would be found proportionate, let alone with regard to the conduct not constituting criminal offences.”
Johns Hopkins cryptography professor Matthew Green has called the EU’s plans “the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR.”
Only time will tell if the EU Parliament listens to experts and preserves the privacy of billions, or if it continues to play ‘rhetorical games’ in an effort to destroy privacy and security.