Researchers say Apple’s scheme to detect child abuse creates serious privacy and security risks.
According to a new report in the New York Times, researchers have concluded that Apple’s plan to automatically scan photos to detect child abuse imagery would unduly risk the privacy and security of law-abiding citizens and could open the way to broader surveillance.
In a 46-page study, the Times explains, the researchers wrote that Apple’s proposal, aimed at detecting images of child sexual abuse on iPhones, as well as a proposal put forward by members of the European Union to detect similar abuse and terrorist imagery on encrypted devices in Europe, relied on “dangerous technology.”
“It should be a national-security priority to resist attempts to spy on and influence law-abiding citizens,” the researchers wrote.
“[Device scanning] makes what was formerly private on a user’s device potentially available to law enforcement and intelligence agencies, even in the absence of a warrant,” said the authors. “Because this privacy violation is performed at the scale of entire populations, it is a bulk surveillance technology.”
They write that the technology, which places background scanning software on users’ devices, not only “tears at the heart of privacy of individual citizens” but is also fallible: it could be evaded by those it is meant to target and misused against everyone else.
Announced in August, the planned features include client-side (i.e. on-device) scanning of users’ iCloud Photos libraries for Child Sexual Abuse Material (CSAM), Communication Safety to warn children and their parents when receiving or sending sexually explicit photos, and expanded CSAM guidance in Siri and Search.
Client-side scanning (CSS) involves analyzing data on a mobile device or personal computer prior to the application of encryption for secure network transit or remote storage. CSS in theory provides a way to look for unlawful content while also allowing data to be protected off-device.
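To make the CSS flow concrete, here is a minimal Python sketch of on-device matching before upload. Everything here is illustrative: the function names and the blocklist are invented for this example, and it uses exact cryptographic hashes for simplicity, whereas Apple’s actual proposal used a perceptual hash (NeuralHash) with threshold secret sharing so that near-duplicate images match and single matches reveal nothing.

```python
import hashlib

# Hypothetical blocklist of known-bad content hashes (illustrative only).
# This entry is the SHA-256 of empty input, used here purely as a stand-in.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_before_upload(data: bytes) -> bool:
    """Return True if the content matches the blocklist.

    Runs on-device, *before* encryption for transit or storage --
    that ordering is what makes the scanning 'client-side'.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_BAD_HASHES

def upload(data: bytes) -> str:
    """Simplified upload path: flag matches, otherwise proceed."""
    if scan_before_upload(data):
        return "flagged"  # matched the blocklist: report instead of storing
    # ...encrypt and upload normally (omitted in this sketch)...
    return "uploaded"
```

The key property the researchers object to is visible even in this toy version: the check runs on the user’s own device against content that would otherwise remain private, and whoever controls `KNOWN_BAD_HASHES` controls what gets flagged.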
Read the entire report over at the New York Times.