Apple is working on improving Siri’s understanding of names for regional points-of-interest by incorporating the user’s location.
On Thursday, Apple published a new entry on its Machine Learning Journal blog that takes a deep technical dive into Siri's ability to recognize obscurely named points of interest (POIs) on a map, such as local restaurants and businesses.
In short, Apple has built customized language models that incorporate the user's immediate location, known as Geo-LMs, which improve the overall accuracy of Siri's speech recognition system.
“The accuracy of automatic speech recognition systems has improved phenomenally over recent years due to the widespread adoption of deep learning techniques,” notes the post.
“Performance improvements have, however, mainly been made in the recognition of general speech; whereas accurately recognizing named entities, like small local businesses, has remained a performance bottleneck.”
After determining an iPhone's current latitude and longitude, Siri can cross-reference a POI database corresponding to one of 169 U.S. geographic regions, matching the user's request against locally available options.
Apple trains the system on 5,000 hours of speech data and uses language models that can be updated frequently with new POIs as businesses open and close.
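The flow described above can be sketched in a few lines: map the device's coordinates to a geographic region, then prefer speech-recognition hypotheses that match that region's POI database. This is a hypothetical illustration only; the region boundaries, POI names, and function names are invented for the example and are not Apple's actual implementation.

```python
# Toy mapping of U.S. geographic regions to bounding boxes and local POI names.
# Both regions and POIs here are illustrative placeholders.
REGIONS = {
    "sf_bay_area": {"bounds": (36.9, 38.6, -123.1, -121.2),
                    "pois": {"tartine bakery", "zuni cafe"}},
    "nyc_metro":   {"bounds": (40.4, 41.2, -74.5, -73.4),
                    "pois": {"katz's delicatessen", "russ & daughters"}},
}

def select_geo_lm(lat, lon):
    """Return the region whose bounding box contains the device location."""
    for name, region in REGIONS.items():
        lat_min, lat_max, lon_min, lon_max = region["bounds"]
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return name
    return None  # fall back to the general (non-geo) language model

def rerank_hypotheses(hypotheses, region):
    """Boost recognition hypotheses that match a POI in the local database."""
    if region is None:
        return hypotheses
    local_pois = REGIONS[region]["pois"]
    # Stable sort: hypotheses found in the local POI set come first.
    return sorted(hypotheses, key=lambda h: h.lower() not in local_pois)

# A request made from San Francisco prefers the locally known business name
# over an acoustically similar but nonexistent one.
region = select_geo_lm(37.77, -122.42)
best = rerank_hypotheses(["tartan bakery", "tartine bakery"], region)[0]
```

In practice the blog post describes interpolating a regional Geo-LM with the general language model inside the recognizer itself, rather than reranking afterward; the sketch simply shows why location narrows the candidate set.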
The Siri Speech Recognition Team says the approach is language-independent as well, so it can be applied to locales beyond U.S. English.
Read the full entry on Apple’s Machine Learning Journal here.