🤖 AI Summary
In text-to-video retrieval, inter-modal distribution discrepancies (the modality gap) and false negatives arising from batch sampling jointly induce gradient conflicts under the InfoNCE loss, undermining stable contrastive alignment. To address this, we propose Gap-Aware Retrieval (GARE): a novel framework that introduces learnable pairwise semantic gap increments Δ_ij and models the optimal descent direction via a first-order Taylor approximation under a trust-region constraint. We further incorporate directional diversity regularization and an information bottleneck constraint to enhance model interpretability and generalization. GARE requires no additional data or pretraining, achieving end-to-end optimization through a lightweight neural gap module. Evaluated on four standard benchmarks—MSR-VTT, DiDeMo, TGIF, and YouCook2—GARE consistently improves retrieval accuracy (average +2.3% R@1) and demonstrates robust alignment under noisy supervision, empirically validating its effectiveness in resolving gradient conflicts.
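The "optimal descent direction under a trust-region constraint" admits a standard closed form, sketched below. This is the generic trust-region argument, not necessarily the paper's exact derivation: minimizing the first-order Taylor expansion of the loss $\mathcal{L}$ in the increment $\Delta$ (here written for a generic perturbation of the similarity logits $s$), subject to a norm bound $\epsilon$, yields the negative normalized gradient:

$$
\Delta^{*} \;=\; \arg\min_{\|\Delta\| \le \epsilon}\; \mathcal{L}(s) + \langle \nabla_{s}\mathcal{L},\, \Delta \rangle
\;=\; -\,\epsilon \, \frac{\nabla_{s}\mathcal{L}}{\|\nabla_{s}\mathcal{L}\|}.
$$

Intuitively, this is why a well-chosen Δ_ij can act as a conflict-resolution mechanism: within the trust region, it moves each pairwise logit along the locally steepest-descent direction of the contrastive loss.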
📝 Abstract
Recent advances in text-video retrieval have been largely driven by contrastive learning frameworks. However, existing methods overlook a key source of optimization tension: the separation between text and video distributions in the representation space (referred to as the modality gap), and the prevalence of false negatives in batch sampling. These factors lead to conflicting gradients under the InfoNCE loss, impeding stable alignment. To mitigate this, we propose GARE, a Gap-Aware Retrieval framework that introduces a learnable, pair-specific increment Δ_ij between text t_i and video v_j to offload the tension from the global anchor representation. We first derive the ideal form of Δ_ij via a coupled multivariate first-order Taylor approximation of the InfoNCE loss under a trust-region constraint, revealing it as a mechanism for resolving gradient conflicts by guiding updates along a locally optimal descent direction. Due to the high cost of directly computing Δ_ij, we introduce a lightweight neural module conditioned on the semantic gap between each video-text pair, enabling structure-aware correction guided by gradient supervision. To further stabilize learning and promote interpretability, we regularize Δ using three components: a trust-region constraint to prevent oscillation, a directional diversity term to promote semantic coverage, and an information bottleneck to limit redundancy. Experiments across four retrieval benchmarks show that GARE consistently improves alignment accuracy and robustness to noisy supervision, confirming the effectiveness of gap-aware tension mitigation.
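To make the mechanism concrete, here is a minimal NumPy sketch of the core idea: a lightweight module conditioned on the pairwise semantic gap (t_i − v_j) emits a scalar increment Δ_ij that is added to the similarity logits before the InfoNCE loss, plus a trust-region-style penalty keeping Δ small. The linear gap module, the dimensions, and the penalty weight are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: B text/video embedding pairs in a d-dim space, L2-normalized.
B, d = 4, 8
t = rng.normal(size=(B, d)); t /= np.linalg.norm(t, axis=1, keepdims=True)
v = rng.normal(size=(B, d)); v /= np.linalg.norm(v, axis=1, keepdims=True)

# Hypothetical "lightweight gap module": a single linear map from the
# pairwise semantic gap (t_i - v_j) to a scalar increment Delta_ij.
W = rng.normal(scale=0.01, size=(d,))

def gap_increments(t, v, W):
    gap = t[:, None, :] - v[None, :, :]   # (B, B, d): gap[i, j] = t_i - v_j
    return gap @ W                        # (B, B):   Delta[i, j]

def info_nce(t, v, delta, tau=0.07):
    # Gap-corrected similarities, then standard InfoNCE over the batch
    # (matched pairs sit on the diagonal).
    s = (t @ v.T + delta) / tau           # (B, B) logits
    s = s - s.max(axis=1, keepdims=True)  # numerical stability
    log_p = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))

delta = gap_increments(t, v, W)
loss = info_nce(t, v, delta)
# Trust-region-style regularizer discouraging large increments (illustrative
# weight; the paper additionally uses diversity and bottleneck terms).
total = loss + 0.1 * np.mean(delta ** 2)
```

In practice the gap module would be trained jointly with the encoders, so gradient supervision flows through Δ_ij and offloads the conflicting pull on the anchor embeddings described above.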