🤖 AI Summary
This work addresses the joint advancement of multimodal understanding, long-context reasoning, and agentic capabilities. Method: It introduces the Gemini 2.X family of large language models, which combine native multimodal understanding, long-context processing (including up to three hours of video), and explicit "thinking" during reasoning. Contributions/Results: (1) The family comprises four models—Gemini 2.5 Pro, Gemini 2.5 Flash, and the earlier Gemini 2.0 Flash and Flash-Lite—spanning use cases from three-hour video comprehension to low-latency, low-cost serving; (2) Gemini 2.5 Pro achieves state-of-the-art performance on frontier coding and reasoning benchmarks; (3) Gemini 2.5 Flash delivers excellent reasoning at a fraction of Pro's compute and latency requirements, so the generation as a whole spans the Pareto frontier of capability versus cost. Together, these advances enable complex agentic workflows.
📝 Abstract
In this report, we introduce the Gemini 2.X model family: Gemini 2.5 Pro and Gemini 2.5 Flash, as well as our earlier Gemini 2.0 Flash and Flash-Lite models. Gemini 2.5 Pro is our most capable model yet, achieving SoTA performance on frontier coding and reasoning benchmarks. In addition to its strong coding and reasoning skills, Gemini 2.5 Pro is a thinking model that excels at multimodal understanding, and it is now able to process up to 3 hours of video content. Its unique combination of long-context, multimodal, and reasoning capabilities unlocks new agentic workflows. Gemini 2.5 Flash provides excellent reasoning abilities at a fraction of the compute and latency requirements, while Gemini 2.0 Flash and Flash-Lite deliver high performance at low latency and cost. Taken together, the Gemini 2.X model generation spans the full Pareto frontier of model capability versus cost, allowing users to explore the boundaries of what is possible with complex agentic problem solving.