Google’s New ML Kit Lets Developers Integrate AI into iOS, Android Apps
At its I/O developer conference today, Google introduced a new software development kit (SDK) called ‘ML Kit’ that allows iOS and Android app developers to easily integrate pre-built, Google-provided machine learning models into their apps (via TechCrunch).
Google’s AI models support text recognition, face detection, barcode scanning, image labeling and landmark recognition, and are available both online and offline. The company said it will extend the current base set of available APIs with two more in the coming months, including a high-density face contour feature for the face detection API.
As TechCrunch notes: “The real game-changer here are the offline models that developers can integrate into their apps and that they can use for free. Unsurprisingly, there is a trade-off here. The models that run on the device are smaller and hence offer a lower level of accuracy. In the cloud, neither model size nor available compute power are an issue, so those models are larger and hence more accurate, too.”
The search engine giant’s group product manager for machine intelligence, Brahim Elbouchikhi, says many developers will likely run some preliminary machine learning inference on the device and then move to the cloud for improved accuracy. He added that for developers who want to go beyond the pre-trained models, ML Kit also supports custom TensorFlow Lite models.
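The hybrid approach Elbouchikhi describes can be sketched in outline. The Python below is purely illustrative and not ML Kit code: `run_on_device_model` and `run_cloud_model` are hypothetical stand-ins (stubbed with fixed results), showing how an app might accept a fast on-device prediction when its confidence is high enough and fall back to the larger, more accurate cloud model otherwise.

```python
# Illustrative sketch of the on-device-first, cloud-fallback pattern.
# The two model functions are hypothetical stubs, NOT real ML Kit APIs.

CONFIDENCE_THRESHOLD = 0.8

def run_on_device_model(image):
    # Smaller on-device model: fast, free, works offline,
    # but less accurate. Stubbed with a fixed result here.
    return {"label": "cat", "confidence": 0.65}

def run_cloud_model(image):
    # Larger cloud-hosted model: needs connectivity,
    # but more accurate. Also stubbed for illustration.
    return {"label": "tabby cat", "confidence": 0.97}

def classify(image):
    """Try the on-device model first; escalate to the cloud
    only when the local confidence is too low."""
    local = run_on_device_model(image)
    if local["confidence"] >= CONFIDENCE_THRESHOLD:
        return local
    return run_cloud_model(image)

result = classify(image=None)
print(result["label"])
```

With the stub values above, the on-device confidence (0.65) falls below the threshold, so the call escalates to the cloud model; a production app would tune the threshold per use case.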
Google also touted that its ML Kit, which uses the standard Neural Networks API on Android and its equivalent on Apple’s iOS, is very much a cross-platform product.