Apple introduced its new Core ML framework at WWDC earlier this week, designed to let developers build smarter apps by embedding on-device machine learning capabilities. However, it seems the iPhone maker’s new machine learning framework still has some learning to do.
Holy crap. CoreML is really smart. pic.twitter.com/3n7rTtK176
— Paul Haddad (@tapbot_paul)
Toying around with the Core ML beta, developer Paul Haddad took to Twitter to showcase how well the framework handles computer vision tasks. Using the new built-in screen recording tool, Haddad tested Core ML’s capacity to identify and caption objects in real time.
While the app appears to accurately recognize certain objects – like a screwdriver, a keyboard and some boxes – it struggled to caption the first-generation Mac Pro, misidentifying the desktop system as either a “speaker unit” or a “space heater.”
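The kind of live classification Haddad demonstrated is typically wired up through the Vision framework, which wraps a Core ML model and returns ranked labels with confidence scores. Below is a minimal sketch of that flow; the `MobileNet` model class is an assumption here (it stands in for whichever compiled classification model is bundled with the app), and in a real app the handler would be fed frames from an `AVCaptureSession`.

```swift
import Vision
import CoreML

// Classify a single camera frame using a bundled Core ML model.
// `MobileNet` is a placeholder for any image-classification model
// added to the Xcode project (Apple publishes several samples).
func classify(_ pixelBuffer: CVPixelBuffer) {
    guard let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    // Vision runs the model and hands back classification observations,
    // sorted by confidence.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first
        else { return }
        print("\(top.identifier) – \(Int(top.confidence * 100))%")
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

Running a request like this on every frame is what produces the live captions seen in the video, and also explains the heat Haddad reported: on-device inference keeps the GPU and CPU busy continuously.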
Despite these inconsistencies, the developer praised the framework’s potential, suggesting that users need to point their cameras “at the right angle” for optimal results. He also noted that running the app considerably heated up the device.
Check out the following video, captured with the new built-in screen recording tool, showing Core ML in action:
I wasn’t joking either just got to get it at the right angle. This also really heats up this phone. pic.twitter.com/dvOEXdHHTn
— Paul Haddad (@tapbot_paul) June 8, 2017