Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models

πŸ“… 2025-04-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenges of contextual modeling and detail fidelity in long-video understanding and high-resolution image analysis, this paper introduces Eagle 2.5, a family of vision-language models. Methodologically, it proposes (1) Automatic Degrade Sampling and an Image Area Preservation mechanism that jointly ensure semantic coherence and local detail retention; (2) Eagle-Video-110K, a long-video dataset integrating both story-level and clip-level annotations; and (3) a suite of training and inference techniques, including long-context post-training, multi-granularity annotation alignment, and efficient visual token compression with adaptive resampling. Evaluated on Video-MME with 512-frame inputs, Eagle 2.5-8B achieves 72.4%, matching GPT-4o and leading open-source models (e.g., 72B/78B variants) while substantially advancing modeling of ultra-long multimodal sequences.

πŸ“ Abstract
We introduce Eagle 2.5, a family of frontier vision-language models (VLMs) for long-context multimodal learning. Our work addresses the challenges of long-video comprehension and high-resolution image understanding, introducing a generalist framework for both tasks. The proposed training framework incorporates Automatic Degrade Sampling and Image Area Preservation, two techniques that preserve contextual integrity and visual details. The framework also includes numerous efficiency optimizations in the pipeline for long-context data training. Finally, we propose Eagle-Video-110K, a novel dataset that integrates both story-level and clip-level annotations, facilitating long-video understanding. Eagle 2.5 demonstrates substantial improvements on long-context multimodal benchmarks, providing a robust solution to the limitations of existing VLMs. Notably, our best model, Eagle 2.5-8B, achieves 72.4% on Video-MME with 512 input frames, matching the results of top-tier commercial models such as GPT-4o and large-scale open-source models like Qwen2.5-VL-72B and InternVL2.5-78B.
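The abstract does not spell out how Image Area Preservation works, but the stated goal (keeping visual details when tiling high-resolution images) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `pick_tile_grid`, the 448-pixel tile size, the tile budget, and the log-ratio cost are all hypothetical choices, not the paper's actual method.

```python
import math

def pick_tile_grid(width, height, tile=448, max_tiles=12):
    """Hypothetical sketch of area-preserving tiling: among grids whose
    tile count fits the budget, prefer the one whose aspect ratio and
    total pixel area best match the original image, so resizing onto
    the grid distorts and downsamples the picture as little as possible.
    """
    orig_aspect = width / height
    orig_area = width * height
    best, best_cost = (1, 1), float("inf")
    for cols in range(1, max_tiles + 1):
        for rows in range(1, max_tiles // cols + 1):
            grid_aspect = cols / rows
            grid_area = cols * rows * tile * tile
            # Penalize both aspect-ratio distortion and area mismatch
            # symmetrically via log ratios.
            cost = (abs(math.log(grid_aspect / orig_aspect))
                    + abs(math.log(grid_area / orig_area)))
            if cost < best_cost:
                best, best_cost = (cols, rows), cost
    return best
```

For a 1792x896 input this sketch selects a 4x2 grid of 448-pixel tiles, which reproduces the original resolution exactly; skewed inputs trade off between shape fidelity and total retained area.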
Problem

Research questions and friction points this paper is trying to address.

Enhancing long-context video and image understanding in VLMs
Addressing contextual integrity and visual detail preservation
Improving efficiency in long-context multimodal data training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic Degrade Sampling preserves contextual integrity
Image Area Preservation maintains visual details
Efficiency optimizations for long-context training
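The "preserves contextual integrity" claim suggests a policy of reducing per-frame detail before dropping frames when a video exceeds the context budget. The sketch below is a guess at that priority ordering; the function name `degrade_sample`, the token counts, and the `min_tokens` floor are illustrative assumptions, not the paper's implementation.

```python
def degrade_sample(num_frames, tokens_per_frame, budget, min_tokens=64):
    """Hypothetical degradation-first sampling: when the full sequence
    exceeds the context budget, first shrink per-frame token counts
    (i.e., lower visual resolution) before subsampling frames, so the
    temporal context stays intact as long as possible."""
    if num_frames * tokens_per_frame <= budget:
        return num_frames, tokens_per_frame  # everything already fits
    # Degrade per-frame detail down to a floor while keeping all frames.
    degraded = budget // num_frames
    if degraded >= min_tokens:
        return num_frames, degraded
    # Budget too tight even at minimum detail: subsample frames instead.
    return budget // min_tokens, min_tokens
```

Under this policy, a 512-frame clip squeezed into a 32K-token budget keeps all 512 frames at reduced detail, whereas frame dropping only kicks in once per-frame tokens hit the floor.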
πŸ”Ž Similar Papers
No similar papers found.