AI Summary
To address the challenges of contextual modeling and detail fidelity in long-video understanding and high-resolution image analysis, this paper introduces the Eagle series of vision-language models. Methodologically, the paper proposes (1) Automatic Degrade Sampling and Image Area Preservation, two mechanisms that jointly ensure semantic coherence and local detail retention; (2) Eagle-Video-110K, the first long-video dataset integrating both story-level and clip-level annotations; and (3) a suite of training and inference techniques, including long-context post-training, multi-granularity annotation alignment, and efficient visual token compression with adaptive resampling. Evaluated on Video-MME with 512-frame inputs, Eagle 2.5-8B achieves 72.4%, matching the performance of GPT-4o and leading open-source models such as Qwen2.5-VL-72B and InternVL2.5-78B, while substantially advancing the modeling of ultra-long multimodal sequences.
Abstract
We introduce Eagle 2.5, a family of frontier vision-language models (VLMs) for long-context multimodal learning. Our work addresses the challenges of long video comprehension and high-resolution image understanding, introducing a generalist framework for both tasks. The proposed training framework incorporates Automatic Degrade Sampling and Image Area Preservation, two techniques that preserve contextual integrity and visual details. The framework also includes numerous efficiency optimizations in the long-context data training pipeline. Finally, we propose Eagle-Video-110K, a novel dataset that integrates both story-level and clip-level annotations, facilitating long-video understanding. Eagle 2.5 demonstrates substantial improvements on long-context multimodal benchmarks, providing a robust solution to the limitations of existing VLMs. Notably, our best model, Eagle 2.5-8B, achieves 72.4% on Video-MME with 512 input frames, matching the results of top-tier commercial models such as GPT-4o and large-scale open-source models like Qwen2.5-VL-72B and InternVL2.5-78B.
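
The intuition behind the two data-handling techniques named above can be pictured with a small, self-contained sketch. This is not the paper's implementation: it is a minimal illustration, assuming a fixed tile size and a fixed per-tile visual-token cost (both hypothetical), of (a) picking a tile grid for a high-resolution image that best preserves its aspect ratio and area, in the spirit of Image Area Preservation, and (b) reducing per-frame tiling before dropping frames so a long video still fits a token budget, the intuition behind Automatic Degrade Sampling. All function names and constants below are hypothetical.

```python
from typing import Tuple

TILE = 448             # assumed tile side length (hypothetical; models differ)
TOKENS_PER_TILE = 256  # assumed visual tokens produced per tile (hypothetical)

def best_tile_grid(width: int, height: int, max_tiles: int) -> Tuple[int, int]:
    """Pick a (cols, rows) grid that keeps the image's aspect ratio and area
    as faithfully as possible within a tile budget (illustrative rule only)."""
    target_ratio = width / height
    best, best_score = (1, 1), float("inf")
    for cols in range(1, max_tiles + 1):
        for rows in range(1, max_tiles // cols + 1):
            # penalize aspect-ratio distortion and uncovered image area
            ratio_err = abs(cols / rows - target_ratio)
            covered = min(width * height, cols * rows * TILE * TILE)
            area_err = 1.0 - covered / (width * height)
            score = ratio_err + area_err
            if score < best_score:
                best, best_score = (cols, rows), score
    return best

def fit_video_to_budget(num_frames: int, token_budget: int,
                        tiles_per_frame: int = 4) -> Tuple[int, int]:
    """Degrade per-frame detail (fewer tiles) before dropping frames, so the
    full temporal context survives as long as possible under the budget."""
    while tiles_per_frame >= 1:
        if num_frames * tiles_per_frame * TOKENS_PER_TILE <= token_budget:
            return num_frames, tiles_per_frame
        tiles_per_frame -= 1
    # even one tile per frame exceeds the budget: uniformly drop frames instead
    keep = max(token_budget // TOKENS_PER_TILE, 1)
    return keep, 1

if __name__ == "__main__":
    print(best_tile_grid(1920, 1080, max_tiles=12))      # a wide grid for a 16:9 image
    print(fit_video_to_budget(512, token_budget=65536))  # (frames kept, tiles per frame)
```

The design point this sketch tries to convey is ordering: visual detail is traded away gradually (fewer tiles per frame, coarser coverage) before temporal or textual context is truncated, which is one way to read the paper's claim of preserving contextual integrity alongside visual detail.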