Google Gemini Deep Think Wins Gold at IMO 2025
An enhanced version of Google DeepMind’s Gemini AI, known as Gemini Deep Think, has officially achieved gold‑medal standard performance at the 2025 International Mathematical Olympiad (IMO).

This result was certified by the official IMO grading committee, marking a clear advance over the silver-medal-level performance of DeepMind's systems last year and a further step toward more general AI reasoning.
The IMO, first held in 1959, is renowned for its difficulty. Each year, up to six math prodigies from each participating country confront six highly challenging problems in algebra, combinatorics, geometry, and number theory over two four‑and‑a‑half‑hour sessions. Medals are awarded to the top half of contestants, with gold medals reserved for the top roughly 8%.
In 2024, DeepMind's AlphaProof and AlphaGeometry 2 systems reached silver-level scores by solving four of six problems, but they relied on translating natural-language problem statements into the formal proof language Lean and sometimes took days to complete a solution. Gemini Deep Think broke new ground by working entirely in natural language, producing full, rigorous proofs that met official IMO standards within the standard 4.5-hour contest window.
DeepMind attributes this achievement to its novel "Deep Think" reasoning framework, which uses parallel reasoning to explore multiple solution paths simultaneously. The system also received additional reinforcement-learning training on problem-solving and theorem-proving data.
IMO President Gregor Dolinar confirmed the official result, praising the model's submissions for their clarity and precision and noting that DeepMind's entry is the first AI system officially recognized as achieving an IMO gold-medal score.

OpenAI announced that its experimental reasoning model also solved five of six problems, scoring 35 of 42 points (each problem is worth up to 7 points). However, OpenAI's results were based on internal grading rather than third-party verification.
DeepMind’s coordinated effort with IMO organizers, by contrast, marked the first time an AI system was formally assessed under standard competition guidelines.