Apple Secretly Released an Open Source Multimodal LLM

Back in October, Apple quietly released an open-source multimodal LLM named ‘Ferret,’ developed in collaboration with researchers from Columbia University (via VentureBeat).


Initially, this release garnered minimal attention, but recent events hint at a shift in its reception.

Open-source models from Mistral and Google’s Gemini Nano model for the Pixel 8 Pro and Android have recently sparked conversations about the potential of local LLMs to power smaller devices.

The buzz intensified after Apple disclosed a significant breakthrough in deploying LLMs on iPhones.

The company published two research papers showcasing novel techniques: one on generating animatable 3D avatars and one on efficient language model inference on memory-constrained devices.

These advancements could enable more immersive visual experiences and allow complex AI systems, such as the rumored ‘Apple GPT,’ to run on consumer devices like iPhones and iPads.
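The core idea behind the inference paper is to keep model weights in flash storage and stream only the parameters actually needed for each forward pass into the device’s limited DRAM. Below is a minimal conceptual sketch of that idea in Python; the class and parameter names (FlashWeightStore, cache_rows) are hypothetical, and the toy omits the paper’s real optimizations such as windowing and row-column bundling.

```python
import tempfile
from collections import OrderedDict

import numpy as np


class FlashWeightStore:
    """Toy weight store: the full matrix lives on disk (memory-mapped) and
    only the rows needed for the current forward pass are copied into a
    small DRAM-resident LRU cache. Names here are illustrative, not from
    Apple's paper."""

    def __init__(self, path, shape, cache_rows=1024):
        # np.memmap keeps the tensor on disk; pages are read lazily on access.
        self.weights = np.memmap(path, dtype=np.float16, mode="r", shape=shape)
        self.cache = OrderedDict()  # row index -> row copied into RAM
        self.cache_rows = cache_rows

    def rows(self, indices):
        """Fetch only the requested rows, serving repeats from the cache."""
        out = []
        for i in indices:
            if i in self.cache:
                self.cache.move_to_end(i)  # mark row as recently used
            else:
                self.cache[i] = np.array(self.weights[i])  # disk -> RAM copy
                if len(self.cache) > self.cache_rows:
                    self.cache.popitem(last=False)  # evict least-recently-used
            out.append(self.cache[i])
        return np.stack(out)


# Demo: write a fake 4096 x 1024 fp16 weight matrix to disk, then read back
# only the rows a (pretend) sparse activation pattern actually touches.
shape = (4096, 1024)
tmp = tempfile.NamedTemporaryFile(suffix=".bin", delete=False)
tmp.close()
np.random.default_rng(0).standard_normal(shape).astype(np.float16).tofile(tmp.name)

store = FlashWeightStore(tmp.name, shape, cache_rows=256)
active_rows = [3, 17, 3, 999]    # e.g. neurons predicted to be nonzero
block = store.rows(active_rows)  # shape (4, 1024); second row 3 hits cache
print(block.shape)
```

The payoff of this pattern is that peak DRAM usage scales with the active working set rather than the full parameter count, which is what makes running a large model on a phone plausible at all.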


The AI community, belatedly recognizing the Ferret release, celebrated Apple’s unexpected foray into the open-source LLM landscape.

Bart de Witte, who leads a European non-profit dedicated to open-source AI in medicine, shared his surprise on X this morning.

He called Ferret’s introduction a sign of Apple’s commitment to impactful AI research and anticipated that local LLMs integrated into iOS would transform the user experience.

Interestingly, news of Apple’s open-source and local ML advancements coincides with reports of Anthropic and OpenAI seeking substantial funding for their proprietary LLM development.

Reuters reported that Anthropic is negotiating to raise $750 million in a round led by Menlo Ventures, while Bloomberg reported that OpenAI is in talks to raise new funding at a valuation exceeding $100 billion.
