Apple Introduces New Features for Cognitive Accessibility

Apple has announced an array of new features focused on cognitive accessibility. Expected to arrive later this year with iOS 17, the new software suite also includes accessibility functions for vision, hearing, and mobility. Plus, users who are nonspeaking or at risk of losing their ability to speak will find new benefits.

In a press release, Apple says the update uses on-device machine learning alongside hardware and software innovations to bring these features to life. The Cupertino company developed them in collaboration with community groups representing users with a broad spectrum of disabilities.

The first feature relates to visual accessibility. Apple’s new Point and Speak feature lets users point at an object with text on it, and the system will read that text out loud. The option is available as part of Detection Mode within Magnifier, offering assistance when reading text in a book, on a sign in a store, and so on. It can also be used alongside other Magnifier features such as People Detection, Door Detection, and Image Descriptions to better help people navigate physical spaces.
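For developers curious how that kind of pipeline works, here is a minimal sketch built on public frameworks: the Vision framework’s text recognition feeding AVSpeechSynthesizer. It illustrates the general recognize-then-speak idea, not Apple’s actual Point and Speak implementation, and the speakText helper is a name invented for this example.

```swift
import Vision
import AVFoundation

// Recognize text in an image, then read it aloud. A sketch of the
// general idea behind "point at text and hear it spoken."
let synthesizer = AVSpeechSynthesizer()

func speakText(in cgImage: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the top candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: lines.joined(separator: " ")))
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```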

“At Apple, we’ve always believed that the best technology is technology built for everyone,” said Tim Cook, Apple’s CEO, in a statement. “Today, we’re excited to share incredible new features that build on our long history of making technology accessible, so that everyone has the opportunity to create, communicate, and do what they love.”

Assistive Access is another accessibility feature Apple is highlighting. It simplifies and streamlines first-party and third-party apps for users with cognitive disabilities, assisting in their day-to-day lives. To pare back what can otherwise be an overwhelming experience, Apple worked with trusted supporters and gathered feedback from people with cognitive disabilities to shape a welcoming user experience. As part of this, Apple is combining Phone and FaceTime into a single customized experience called the Calls app, which offers a distinct UI with larger text and high-contrast buttons. The same treatment is being applied to Messages, Camera, Photos, and Music.
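To make that design goal concrete, here is a minimal SwiftUI sketch of a simplified, high-contrast call screen in the spirit of Assistive Access. The SimplifiedCallsView name and the contact list are invented for illustration; Apple has not published the Calls app’s actual implementation.

```swift
import SwiftUI

// A pared-down call screen: a few oversized, clearly labeled,
// high-contrast buttons instead of a dense standard UI.
struct SimplifiedCallsView: View {
    let contacts = ["Mom", "Dad", "Caregiver"]  // placeholder names

    var body: some View {
        VStack(spacing: 24) {
            ForEach(contacts, id: \.self) { name in
                Button {
                    // Placing the call is out of scope for this sketch.
                } label: {
                    Text("Call \(name)")
                        .font(.largeTitle.bold())              // large text
                        .frame(maxWidth: .infinity, minHeight: 88)
                        .foregroundColor(.white)
                        .background(Color.black)               // high contrast
                        .cornerRadius(16)
                }
            }
        }
        .padding()
    }
}
```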

Another aspect Apple is focusing on is speech accessibility. Apple revealed two new features in development, dubbed Live Speech and Personal Voice. The former is a type-to-speech function: users type out what they’d like to say, and the software reads it out loud, even during phone and FaceTime calls. Live Speech also lets users save commonly used phrases. Personal Voice, on the other hand, is built for those at risk of losing their ability to speak, such as people with a condition like ALS. The feature has users read a series of text prompts to record 15 minutes of audio, and on-device machine learning keeps that information private and secure while integrating it seamlessly with features like Live Speech.
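The type-to-speech mechanic itself can be approximated with public APIs today. Below is a minimal sketch using AVSpeechSynthesizer, with TypeToSpeech as a hypothetical wrapper; it illustrates the concept rather than Apple’s Live Speech feature, and Personal Voice’s on-device voice training has no public equivalent here.

```swift
import AVFoundation

// Type-to-speech: speak whatever the user typed, and keep a small
// store of saved phrases for quick reuse.
final class TypeToSpeech {
    private let synthesizer = AVSpeechSynthesizer()
    private(set) var savedPhrases: [String] = []

    // Read the typed text out loud.
    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }

    // Save a commonly used phrase.
    func savePhrase(_ phrase: String) {
        savedPhrases.append(phrase)
    }
}

let speech = TypeToSpeech()
speech.savePhrase("I'll be there in five minutes.")
speech.speak("Can I get a coffee, please?")
```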

As of now, Apple’s new features are only being previewed, but the company says even more are coming later this year. These include the ability to pair Made for iPhone hearing devices with Mac, a new Voice Control guide, and enhancements to Switch Control. Additionally, the ability to automatically pause GIFs in Messages and Safari is in development.
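While that system-wide auto-pause setting is new, apps can already honor motion sensitivity through the existing Reduce Motion accessibility flag. A minimal sketch, assuming a UIImageView-based GIF player (the startAnimationIfAllowed helper is hypothetical):

```swift
import UIKit

// Check the system Reduce Motion setting before starting a GIF-style
// animation; show a still frame instead when the user prefers less motion.
func startAnimationIfAllowed(on imageView: UIImageView) {
    if UIAccessibility.isReduceMotionEnabled {
        imageView.stopAnimating()   // leave a single still frame visible
    } else {
        imageView.startAnimating()
    }
}
```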
