Collision in New Child Abuse Detection Algorithm ‘Not a Concern’ Says Apple

A collision discovered by researchers in Apple’s newly announced CSAM detection system has raised fresh concerns about the system’s integrity. The iPhone maker, however, maintains that the finding is not a concern, The Verge reports.


First highlighted by GitHub user Asuhariet Ygvar, the issue concerns the hashing algorithm, called NeuralHash, which allows Apple to check for exact matches of known child-abuse imagery without possessing any of the images or gleaning any information about non-matching pictures.
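
To make that idea concrete, here is a minimal sketch of how this kind of fingerprint matching works in general. It uses a toy “average hash” and made-up fingerprint values in place of NeuralHash and Apple’s actual database, so it illustrates the concept rather than Apple’s implementation:

```python
# A toy "average hash" stands in for NeuralHash here; the values in
# KNOWN_HASHES are made up. The point is only to show that matching happens
# on short fingerprints, never on the flagged images themselves.
from PIL import Image

def toy_perceptual_hash(path: str) -> int:
    """Toy 64-bit average hash; the real NeuralHash is a neural network."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    average = sum(pixels) / len(pixels)
    return sum(int(p > average) << i for i, p in enumerate(pixels))

# Purely illustrative fingerprints of "known" images.
KNOWN_HASHES = {0x9F3B21C8E4407A15, 0x0102030405060708}

def matches_known_image(path: str) -> bool:
    # Only the fingerprint is compared; a non-matching photo reveals nothing
    # beyond the fact that it did not match.
    return toy_perceptual_hash(path) in KNOWN_HASHES
```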

Ygvar has posted code for a reconstructed Python version of NeuralHash, which he claims to have reverse-engineered from previous versions of iOS. The resulting algorithm is a generic version of NeuralHash rather than the specific algorithm that will be used once the proposed CSAM system is deployed.

“Early tests show that it can tolerate image resizing and compression, but not cropping or rotations,” Ygvar wrote on Reddit, sharing the new code.
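
For a rough sense of what that kind of tolerance testing looks like, the sketch below hashes an image along with resized, recompressed, rotated, and cropped copies and reports which transforms leave the fingerprint unchanged. It again relies on a toy average hash rather than the reconstructed NeuralHash model, so the specific results are only illustrative:

```python
import io
from PIL import Image

def toy_hash(img: Image.Image) -> int:
    # Same toy average hash as above, applied to an in-memory image.
    pixels = list(img.convert("L").resize((8, 8)).getdata())
    average = sum(pixels) / len(pixels)
    return sum(int(p > average) << i for i, p in enumerate(pixels))

def transform_report(path: str) -> dict:
    original = Image.open(path).convert("RGB")
    base = toy_hash(original)

    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=30)  # heavy recompression
    variants = {
        "resized": original.resize((original.width // 2, original.height // 2)),
        "compressed": Image.open(io.BytesIO(buffer.getvalue())),
        "rotated": original.rotate(90, expand=True),
        "cropped": original.crop((20, 20, original.width - 20, original.height - 20)),
    }
    # True means the fingerprint survived the transform unchanged.
    return {name: toy_hash(img) == base for name, img in variants.items()}
```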

Shortly afterward, a user called Cory Cornelius produced a collision in the algorithm: two different images that generate the same hash. It’s a significant finding, although Apple says additional protections in its CSAM system will prevent it from being exploited.
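
What a collision means is easiest to see with a toy example. The snippet below builds two images whose pixel data differ byte for byte yet whose fingerprints, under the same toy average hash used above, come out identical; Cornelius’s finding is the same phenomenon demonstrated against the far more complex NeuralHash:

```python
from PIL import Image

def toy_perceptual_hash(img: Image.Image) -> int:
    pixels = list(img.convert("L").resize((8, 8)).getdata())
    average = sum(pixels) / len(pixels)
    return sum(int(p > average) << i for i, p in enumerate(pixels))

# Image A: columns alternating black (0) and white (255).
a = Image.new("L", (8, 8))
a.putdata([0 if x % 2 == 0 else 255 for _ in range(8) for x in range(8)])

# Image B: the same alternating layout, but in greys (60 and 200), so the raw
# pixel data differ while the above/below-average pattern stays the same.
b = Image.new("L", (8, 8))
b.putdata([60 if x % 2 == 0 else 200 for _ in range(8) for x in range(8)])

assert list(a.getdata()) != list(b.getdata())            # different images
assert toy_perceptual_hash(a) == toy_perceptual_hash(b)  # same hash: a collision
```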

In response, Apple has said that if an image that produced a NeuralHash collision were flagged by the system, it would be checked against a secondary, server-side hashing system and identified as an error before reaching human moderators.
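
Conceptually, that layered check looks something like the sketch below, in which two different toy hashes stand in for the on-device NeuralHash and the secondary server-side hash; an image engineered to collide with the first hash would be very unlikely to also match under the second, so it is caught before any human review:

```python
from PIL import Image

def on_device_hash(img: Image.Image) -> int:
    # Stand-in for NeuralHash: toy average hash.
    pixels = list(img.convert("L").resize((8, 8)).getdata())
    average = sum(pixels) / len(pixels)
    return sum(int(p > average) << i for i, p in enumerate(pixels))

def server_side_hash(img: Image.Image) -> int:
    # Stand-in for the secondary hash: toy difference hash (compares
    # horizontally adjacent pixels), deliberately different from the first.
    pixels = list(img.convert("L").resize((9, 8)).getdata())
    return sum(int(pixels[r * 9 + c] > pixels[r * 9 + c + 1]) << (r * 8 + c)
               for r in range(8) for c in range(8))

def needs_human_review(img: Image.Image, known_device: set, known_server: set) -> bool:
    if on_device_hash(img) not in known_device:
        return False   # never flagged in the first place
    if server_side_hash(img) not in known_server:
        return False   # flagged by a collision, but caught as an error here
    return True        # only now would a human moderator see the match
```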
