Google DeepMind’s new reasoning model, Gemini “Deep Think,” has become the first AI system to earn an official gold medal at the International Mathematical Olympiad (IMO). It matched the world’s top teenage mathematicians by solving five of the contest’s six problems (35 of 42 points) within the 4.5‑hour window, writing its proofs entirely in natural language. IMO graders certified the solutions as flawless, and competition president Gregor Dolinar hailed them as “clear, precise and astonishing.” The feat eclipses last year’s silver‑level showing by DeepMind’s AlphaProof and AlphaGeometry 2 systems and underscores a steep year‑on‑year jump in AI reasoning power.
Deep Think’s breakthrough rests on a “parallel‑thinking” search that explores multiple solution paths simultaneously, combined with a training curriculum packed with thousands of curated Olympiad proofs. By operating end‑to‑end in plain English, the system sidesteps the formal‑logic scaffolding earlier AI solvers required, marking a decisive shift toward models that reason the way humans write. Brown University mathematician and former IMO medalist Junehyuk Jung, now a visiting researcher at DeepMind, told Reuters the result suggests AI is “less than a year away from tackling unsolved research problems” in pure mathematics, and potentially in physics or computer science.
The gold medal also intensifies an arms race with OpenAI, which self‑published an unverified claim of matching performance two days before DeepMind’s announcement. DeepMind, by contrast, waited for official verification, a point CEO Demis Hassabis underscored on X, noting that the lab “respected the IMO Board’s request that all AI labs share results only after independent experts had signed off.”
Gemini Deep Think will first reach vetted mathematicians and academic partners, with a mass roll‑out promised for Gemini Ultra subscribers later this year. For Google, the win showcases a model that is both multimodal and logically rigorous—an essential selling point as enterprise buyers weigh Gemini against OpenAI’s forthcoming GPT‑5. For the research community, it signals that large‑language models are crossing from pattern matching into genuine problem‑solving and could soon collaborate on proofs that have eluded humans for decades. Whether Gemini or a rival system is the first to crack an open conjecture, the 2025 Olympiad may be remembered as the moment AI earned its place on the world’s most prestigious mathematical leaderboard—and gave a preview of the tools scholars will wield before the decade is out.