Apple’s Process of Intercepting Child Abuse Imagery Revealed in New Report

A new Forbes report regarding a warrant filed in Seattle, Washington, has revealed, in part, how Apple uses technology to “intercept” emails that may contain child abuse imagery.

A search warrant obtained by the publication reveals that despite multiple reports of Apple being unhelpful in serious law enforcement cases, the Cupertino company has played an important role in investigations. Specifically, Apple has checked messages flagged for illegal material such as child abuse imagery and provided data on the iCloud user, including the name, address, and mobile phone number that the user consented to submit when they signed up.

The report notes that Apple uses hashes, much like Facebook and Google, to detect child abuse imagery:

Think of these hashes as signatures attached to previously-identified child abuse photos and videos. When Apple systems – not staff – see one of those hashes passing through the company’s servers, a flag will go up. The email or file containing the potentially illegal images will be quarantined for further inspection.

When Apple’s servers detect content matching a known hash, they flag the message or file and immediately quarantine it so it doesn’t reach the intended recipient. The company then tips off the relevant authorities or law enforcement, and an Apple employee inspects the quarantined content, along with any attached messages, and compiles a small report.
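The flow described above, matching an attachment’s signature against a database of known hashes and quarantining anything that matches, can be sketched roughly as follows. This is a simplified illustration, not Apple’s actual implementation: real systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, whereas this sketch uses an ordinary cryptographic hash for clarity, and `KNOWN_HASHES` and `screen_email` are hypothetical names.

```python
import hashlib

# Hypothetical database of signatures of previously identified illegal
# images. Production systems use perceptual hashes rather than SHA-256;
# a cryptographic hash is used here only to keep the sketch runnable.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a known signature.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_attachment(data: bytes) -> str:
    """Compute a signature for an email attachment."""
    return hashlib.sha256(data).hexdigest()

def screen_email(attachments: list[bytes]) -> bool:
    """Return True if any attachment matches a known hash, i.e. the
    message should be quarantined for human review instead of being
    delivered to the intended recipient."""
    return any(hash_attachment(a) in KNOWN_HASHES for a in attachments)
```

In this model, the server never inspects image content itself; it only compares signatures, and a human reviewer sees a message only after `screen_email` returns `True`, which mirrors the machines-first, staff-second process the report describes.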

Once the content is confirmed as illegal, the report is sent to the relevant authorities, such as the National Center for Missing and Exploited Children (NCMEC).

This method does not apply to encrypted content and appears to cover only emails sent through Apple’s servers. As the report notes, it is a server, not employees, that screens every email passing through it; employees see only emails that have been flagged as containing signatures that could point to child abuse imagery.