Rebalancing Contrastive Alignment with Learnable Semantic Gaps in Text-Video Retrieval

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In text-to-video retrieval, inter-modal distribution discrepancies (the modality gap) and batch-sampled hard negatives jointly induce gradient conflicts under the InfoNCE loss, undermining stable contrastive alignment. To address this, we propose Gap-Aware Retrieval (GARE): a novel framework that introduces learnable pairwise semantic gap increments Δ_ij and models the optimal descent direction via first-order Taylor approximation under trust-region constraints. We further incorporate directional diversity regularization and an information bottleneck constraint to enhance model interpretability and generalization. GARE requires no additional data or pretraining, achieving end-to-end optimization through a lightweight neural gap module. Evaluated on four standard benchmarks—MSR-VTT, DiDeMo, TGIF, and YouCook2—GARE consistently improves retrieval accuracy (average +2.3% R@1) and demonstrates robust alignment under noisy supervision, empirically validating its effectiveness in resolving gradient conflicts.

📝 Abstract
Recent advances in text-video retrieval have been largely driven by contrastive learning frameworks. However, existing methods overlook a key source of optimization tension: the separation between text and video distributions in the representation space (referred to as the modality gap), and the prevalence of false negatives in batch sampling. These factors lead to conflicting gradients under the InfoNCE loss, impeding stable alignment. To mitigate this, we propose GARE, a Gap-Aware Retrieval framework that introduces a learnable, pair-specific increment Δ_ij between text t_i and video v_j to offload the tension from the global anchor representation. We first derive the ideal form of Δ_ij via a coupled multivariate first-order Taylor approximation of the InfoNCE loss under a trust-region constraint, revealing it as a mechanism for resolving gradient conflicts by guiding updates along a locally optimal descent direction. Due to the high cost of directly computing Δ_ij, we introduce a lightweight neural module conditioned on the semantic gap between each video-text pair, enabling structure-aware correction guided by gradient supervision. To further stabilize learning and promote interpretability, we regularize Δ using three components: a trust-region constraint to prevent oscillation, a directional diversity term to promote semantic coverage, and an information bottleneck to limit redundancy. Experiments across four retrieval benchmarks show that GARE consistently improves alignment accuracy and robustness to noisy supervision, confirming the effectiveness of gap-aware tension mitigation.
Problem

Research questions and friction points this paper is trying to address.

Addressing the modality gap in contrastive learning for text-video retrieval
Mitigating the impact of false negatives in batch sampling
Resolving conflicting gradients under the InfoNCE loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable semantic gaps mitigate gradient conflicts
Lightweight neural module enables structure-aware correction
Regularization stabilizes learning and enhances interpretability
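The core mechanism described above can be illustrated with a minimal NumPy sketch: a lightweight module maps each pairwise semantic gap (t_i − v_j) to a scalar increment Δ_ij, which is added to the InfoNCE logits, with a trust-region penalty keeping the increments small. The module architecture, the scalar form of Δ_ij, and the penalty weight λ are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def gap_module(t, v, W1, W2):
    # Hypothetical lightweight MLP: maps the semantic gap (t - v)
    # to a scalar increment Delta_ij per text-video pair.
    h = np.tanh((t - v) @ W1)
    return (h @ W2).squeeze(-1)

def gap_aware_infonce(T, V, W1, W2, tau=0.07, lam=0.1):
    # T: (B, d) text embeddings, V: (B, d) video embeddings (L2-normalized).
    sim = T @ V.T / tau                              # base similarity logits
    # Pairwise increments Delta_ij via broadcasting over all (i, j) pairs.
    delta = gap_module(T[:, None, :], V[None, :, :], W1, W2)  # (B, B)
    logits = sim + delta
    # InfoNCE with diagonal pairs as positives (text-to-video direction).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nce = -np.mean(np.diag(log_probs))
    # Trust-region-style penalty discouraging large corrections.
    reg = lam * np.mean(delta ** 2)
    return nce + reg
```

In this sketch the base logits are left untouched at inference time; Δ_ij only reshapes the training objective so that gradient tension from hard or false negatives is absorbed by the increments rather than the anchor embeddings.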
Jian Xiao
School of Computer Science and Information Engineering, Hefei University of Technology
multimodal vision and language, text-video retrieval

Zijie Song
Anhui University
Multimedia

Jialong Hu
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China

Hao Cheng
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China

Zhenzhen Hu
Hefei University of Technology
Multimedia

Jia Li
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China

Richang Hong
Hefei University of Technology
Multimedia, Pattern Recognition