Apple will begin checking photos being uploaded to its iCloud service against a database of Child Sexual Abuse Material (CSAM), in an effort to protect children.
In the battle over encryption — known as the Crypto Wars — governments have often used protecting children as justification for promoting backdoors in encryption and security. Unfortunately, no matter how well-intentioned, as we have highlighted before, there is no way to create a backdoor in encryption that is safe from exploitation by others.
Apple appears to be trying to offer a compromise solution, one that would preserve privacy while still protecting children.
Apple outlined how its CSAM system will work:
Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.
Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
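To make the matching-plus-threshold idea above concrete, here is a minimal sketch. This is an illustration only, not Apple's implementation: the real system uses a perceptual hash (NeuralHash) and private set intersection so that neither the device nor Apple learns individual match results, whereas this sketch uses a plain SHA-256 lookup and an in-the-clear counter. The hash database contents and threshold value are stand-ins.

```python
import hashlib

# Stand-in for the unreadable on-device database of known CSAM hashes
# derived from NCMEC and other child safety organizations. The byte
# strings here are purely illustrative placeholders.
KNOWN_IMAGE_HASHES = {
    hashlib.sha256(b"known-flagged-image-1").hexdigest(),
    hashlib.sha256(b"known-flagged-image-2").hexdigest(),
}

# Stand-in for Apple's threshold; the real value is chosen to keep the
# chance of incorrectly flagging an account below one in one trillion
# per year.
MATCH_THRESHOLD = 3

def account_crosses_threshold(images: list[bytes]) -> bool:
    """Count matches against the known-hash set; flag only once the
    number of matches crosses the threshold, mirroring (very loosely)
    how safety vouchers stay uninterpretable below the threshold."""
    matches = 0
    for image_bytes in images:
        digest = hashlib.sha256(image_bytes).hexdigest()
        if digest in KNOWN_IMAGE_HASHES:
            matches += 1
    return matches >= MATCH_THRESHOLD
```

In the real design, the per-image match result is sealed inside a cryptographic safety voucher; threshold secret sharing ensures Apple can decrypt the vouchers only after enough matches accumulate, which this simple counter can only gesture at.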
Needless to say, Apple’s announcement has been met with a variety of responses. The Electronic Frontier Foundation (EFF), in particular, has been highly critical of Apple’s decision, even accusing the company of going back on its former privacy stance and embracing backdoors.
The EFF is particularly concerned that Apple’s new system could be broadened to cover speech, or virtually anything, that governments do not approve of. While there is certainly a concern the system could be abused that way, using an on-device method to screen for something as vile as CSAM is a far cry from using it to monitor speech.
In many ways, Apple’s new approach to combatting CSAM is similar to its approach to combatting malware. There have been times in the past when Apple took the liberty of proactively removing particularly dangerous malware from devices. Critics could argue that Apple could extend that, at the behest of governments, to removing any programs deemed offensive. But that hasn’t happened. Why? Because there’s a big difference between removing malware and censoring applications.
The National Center for Missing & Exploited Children, admittedly a critic of end-to-end encryption, praised Apple’s decision.
“With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material,” John Clark, chief executive of the NCMEC, said in a statement, via Reuters. “The reality is that privacy and child protection can co-exist.”
Ultimately, only time will tell if Apple has struck the right balance between privacy and child protection. It’s worth noting Microsoft, Google and Facebook already have similar systems in place, but Apple believes its system offers significant benefits in the realm of privacy.
Beyond going a long way toward protecting children, Apple’s willingness to make this concession may also disarm one of the biggest arguments against end-to-end encryption, preserving the technology against legislative action.