Twitter Using Machine Learning to Auto-Crop Photos Better

In a blog post, Twitter has revealed that it now uses machine learning to auto-crop images posted to the service. The company says speedy neural networks automatically crop picture previews on your timeline to their most interesting part, improving consistency and letting you see more Tweets at a glance.

Twitter notes that it initially used face detection to crop images to faces, but found that this method didn’t work with pictures of scenery, objects, and pets. It then started cropping by focusing on “salient” image regions. To define these, the company used data from academic eye-tracking studies, which record which areas of an image people look at first. This data can be used to train neural networks and other algorithms to predict what people might want to look at.

The basic idea is to use these predictions to center a crop around the most interesting region. Unfortunately, the neural networks used to predict saliency were too slow to run in production, so Twitter had to shrink the model. As the company explains:

In addition to optimizing the neural network’s implementation, we used two techniques to reduce its size and computational requirements. 

[…] Together, these two methods allowed us to crop media 10x faster than just a vanilla implementation of the model and before any implementation optimizations. This lets us perform saliency detection on all images as soon as they are uploaded and crop them in real-time.
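To make the crop-centering idea concrete, here is a minimal, hypothetical sketch (not Twitter’s actual code) of how a saliency map could drive a crop: find the point of maximum saliency and clamp a fixed-size crop window around it. The function name, crop dimensions, and the random stand-in data are illustrative assumptions; in production the saliency map would come from the trained network rather than random values.

```python
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Center a crop of size (crop_h, crop_w) on the most salient pixel,
    clamped so the window stays inside the image."""
    # Assumes the saliency map has the same height/width as the image
    # and that the requested crop fits inside the image.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    img_h, img_w = image.shape[:2]
    top = int(np.clip(y - crop_h // 2, 0, img_h - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, img_w - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

# Illustrative usage: a 1200x800 image cropped to a 600x600 timeline preview.
image = np.random.randint(0, 256, (800, 1200, 3), dtype=np.uint8)
saliency = np.random.rand(800, 1200)   # stand-in for a saliency model's output
preview = crop_around_saliency(image, saliency, crop_h=600, crop_w=600)
print(preview.shape)                   # -> (600, 600, 3)
```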

Twitter’s latest updates are currently rolling out to everyone on the web, iOS, and Android.