EagleNet: Energy-Aware Fine-Grained Relationship Learning Network for Text-Video Retrieval

📅 2026-03-26
🤖 AI Summary
This work addresses the limitation of existing text-video retrieval methods that overlook contextual interactions among video frames, leading to suboptimal alignment between textual representations and video semantics. To overcome this, we propose EagleNet, which constructs a text-frame graph to explicitly model inter-frame dependencies and text-frame interactions through a fine-grained relation learning (FRL) mechanism. Furthermore, an energy-aware matching (EAM) strategy is introduced to refine cross-modal alignment, while a sigmoid-based contrastive loss function enhances training stability. Extensive experiments demonstrate that EagleNet achieves state-of-the-art performance across four benchmark datasets—MSRVTT, DiDeMo, MSVD, and VATEX—validating its effectiveness and strong generalization capability.

📝 Abstract
Text-video retrieval has seen significant improvements driven by the recent development of large-scale vision-language pre-trained models. Traditional methods primarily focus on video representations or cross-modal alignment, while recent works shift toward enriching text expressiveness to better match the rich semantics of videos. However, these methods exploit only interactions between the text and individual frames or the whole video, ignoring the rich interactions among frames within a video, so the expanded text cannot capture frame contextual information, leaving a gap between text and video. In response, we introduce the Energy-Aware Fine-Grained Relationship Learning Network (EagleNet) to generate accurate, context-aware enriched text embeddings. Specifically, the proposed Fine-Grained Relationship Learning (FRL) mechanism first constructs a text-frame graph from the generated text candidates and video frames, then learns the relationships among texts and frames, which are finally used to aggregate the text candidates into an enriched text embedding that incorporates frame contextual information. To further improve fine-grained relationship learning in FRL, we design Energy-Aware Matching (EAM) to model the energy of text-frame interactions and thereby accurately capture the distribution of real text-video pairs. Moreover, for more effective cross-modal alignment and more stable training, we replace the conventional softmax-based contrastive loss with the sigmoid loss. Extensive experiments demonstrate the superiority of EagleNet on MSRVTT, DiDeMo, MSVD, and VATEX. Code is available at https://github.com/draym28/EagleNet.
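The abstract's replacement of the softmax-based contrastive loss with a sigmoid loss is presumably along the lines of the SigLIP-style pairwise formulation, where each text-video pair in the batch is scored independently with a binary label. A minimal NumPy sketch under that assumption; the `temperature` and `bias` values are illustrative defaults, not the paper's settings:

```python
import numpy as np

def sigmoid_contrastive_loss(text_emb, video_emb, temperature=10.0, bias=-10.0):
    """Pairwise sigmoid contrastive loss (SigLIP-style sketch).

    Matched (diagonal) text-video pairs get label +1, all other
    pairs in the batch get label -1; each pair contributes an
    independent binary log-sigmoid term.
    """
    # L2-normalize both modalities so logits are scaled cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = temperature * (t @ v.T) + bias

    n = logits.shape[0]
    labels = 2.0 * np.eye(n) - 1.0  # +1 on the diagonal, -1 elsewhere

    # -log sigmoid(z) = softplus(-z) = logaddexp(0, -z), numerically stable
    return float(np.mean(np.logaddexp(0.0, -labels * logits)))
```

Unlike the softmax loss, no normalization couples the pairs in a row, which is the property usually credited with more stable training at varying batch sizes: correctly aligned batches yield a lower loss than misaligned ones, pair by pair.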
Problem

Research questions and friction points this paper is trying to address.

text-video retrieval
cross-modal alignment
frame contextual information
text expressiveness
vision-language pre-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-Grained Relationship Learning
Energy-Aware Matching
Text-Video Retrieval
Context-Aware Text Embedding
Sigmoid Contrastive Loss
Yuhan Chen
Sun Yat-sen University
Pengwen Dai
Shenzhen Campus of Sun Yat-sen University; Shenzhen Key Laboratory of Adversarial Artificial Intelligence
Chuan Wang
School of Artificial Intelligence, Beijing Normal University
Quantum optics, Quantum information, Nano- and micro-photonics
Dayan Wu
Institute of Information Engineering, CAS
Xiaochun Cao
Sun Yat-sen University
Computer Vision, Artificial Intelligence, Multimedia, Machine Learning