Apple Introduces ARKit, Machine Learning Frameworks for Developers

At WWDC 2017, Apple unveiled Core ML, a new machine learning framework, and ARKit, an augmented reality framework for developers. With Core ML, Apple is offering a foundational machine learning framework used across all Apple products, including Siri, Camera, and QuickType, while ARKit allows developers to easily create augmented reality experiences for iPhone and iPad.


Apple says that Core ML is built on top of low-level technologies like Metal and Accelerate, which allows it to seamlessly take advantage of the CPU and GPU for maximum performance and efficiency. Developers can easily build computer vision machine learning features into their apps, with supported features including face tracking, face detection, landmark detection, text detection, rectangle detection, barcode detection, object tracking, and image registration.
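
To make that concrete, here is a minimal sketch of one of those features, face detection, as exposed by the Vision framework in the iOS 11 SDK. The function name and the image passed in are placeholders for illustration:

import UIKit
import Vision

// Hypothetical helper: detect faces in a still image using the
// Vision framework's face-rectangle request.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // Describe the analysis to perform; results arrive in the handler.
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Bounding boxes are normalized to the image (0...1).
            print("Face at \(face.boundingBox)")
        }
    }

    // A handler performs one or more requests against a single image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}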

With ARKit, iPhone and iPad can analyze the scene presented by the camera view and find horizontal planes in the room.
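
As a sketch, enabling horizontal plane detection in the released iOS 11 SDK looks roughly like this (the view controller and its wiring are assumptions for illustration):

import UIKit
import ARKit

// Hypothetical view controller that asks ARKit to find horizontal planes.
class PlaneFinderViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal  // tables, floors, etc.
        sceneView.session.run(configuration)
    }

    // Called once for each horizontal plane ARKit discovers.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Found plane with extent \(plane.extent)")
    }
}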

ARKit can detect horizontal planes like tables and floors, and can also track and place objects on smaller feature points. It uses the camera sensor to estimate the total amount of light available in a scene and applies the appropriate amount of lighting to virtual objects.
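
A minimal sketch of reading that light estimate, assuming a running ARSCNView named sceneView:

import ARKit

// Read ARKit's ambient light estimate for the current frame so
// virtual objects can be lit to match the room.
func applyLightEstimate(to sceneView: ARSCNView) {
    guard let estimate = sceneView.session.currentFrame?.lightEstimate else { return }
    // ambientIntensity is in lumens; ~1000 corresponds to a well-lit scene.
    print("Ambient intensity: \(estimate.ambientIntensity)")

    // ARSCNView can also apply the estimate to the scene automatically.
    sceneView.automaticallyUpdatesLighting = true
}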

ARKit runs on the Apple A9 and A10 processors, which deliver the performance needed for fast scene understanding and let developers build detailed and compelling virtual content on top of real-world scenes. Developers can take advantage of the ARKit optimizations in Metal and SceneKit, as well as third-party tools like Unity and Unreal Engine.
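
For example, here is a hedged sketch of placing a virtual object on a detected plane through ARKit's SceneKit integration, using a hit test from a screen point; the tap handling that supplies the point is assumed:

import ARKit
import SceneKit

// Hypothetical helper: drop a small cube where a screen tap
// intersects a plane ARKit has already detected.
func placeCube(in sceneView: ARSCNView, at point: CGPoint) {
    // Hit-test against the extents of known planes.
    guard let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }

    let cube = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
    // The result's world transform tells us where the plane was struck.
    let t = result.worldTransform
    cube.position = SCNVector3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
    sceneView.scene.rootNode.addChildNode(cube)
}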

Developers can download the latest beta of Xcode 9, which includes the iOS 11 SDK with Core ML and ARKit.
