Apple’s ‘Differential Privacy’ Still Collects Too Much Specific Data: Study
Apple’s use of “differential privacy,” a method that inserts random noise into data as it’s collected en masse, doesn’t go far enough to protect personal information, a new report found this week.
In the study, researchers from the University of Southern California, Indiana University and China’s Tsinghua University evaluated how Apple injects static into users’ identifiable info, from messages to internet history, to baffle anyone looking at the data, from the government to Apple’s own staff.
“Apple’s privacy loss parameters exceed the levels typically considered acceptable by the differential privacy research community,” said USC professor Aleksandra Korolova.
The metric for measuring a setup’s differential-privacy effectiveness is called a “privacy loss parameter” or, as a variable, “epsilon”; the lower the epsilon, the stronger the privacy guarantee. In this case, the researchers discovered that Apple’s epsilon on macOS allowed far more personal data to be identifiable than digital privacy theorists are comfortable with, and iOS 10 permits even more.
macOS is said to have an epsilon of 6, while iOS 10 sits at 14. By comparison, Google claims the differential privacy system in Chrome has an epsilon of 2 in most cases, and a lifetime ceiling of 8 to 9. Google also open-sources related code, making it possible to double-check.
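To see why those epsilon values alarm researchers, consider randomized response, a textbook epsilon-differentially-private mechanism (a minimal sketch for illustration only, not Apple’s actual algorithm): each user reports a true yes/no answer with probability e^epsilon / (e^epsilon + 1), and lies otherwise. The higher the epsilon, the more often the true answer leaks through.

```python
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies epsilon-differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return truth if random.random() < p_truth else not truth

# How often the true answer survives at the epsilons discussed above:
# Chrome's claimed per-report value (2), macOS (6), and iOS 10 (14).
for eps in (2, 6, 14):
    p = math.exp(eps) / (math.exp(eps) + 1)
    print(f"epsilon={eps}: true answer reported {p:.6f} of the time")
```

At epsilon 2 the report is truthful about 88% of the time, leaving real plausible deniability; at epsilon 14 it is truthful more than 99.9999% of the time, so the “noise” almost never hides anything.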
In response to the study, Apple said it disagrees with many of its points, including the degree to which it can correlate data with a particular person. The company insisted that it varies noise based on the type of data, and that the researchers simply combined the epsilons for all data types on the assumption that the information could be pieced together.
The study found that the iOS 11 beta had an epsilon of 43, but that’s likely because of normal testing designed to weed out bugs before the software’s September 19 launch.