AI Summary
This work addresses catastrophic forgetting in continual text-to-video retrieval, which arises from intra-modal feature drift and the breakdown of cross-modal alignment. To mitigate this, the paper introduces equiangular tight frame (ETF) geometry into cross-modal continual learning for the first time, proposing a structured alignment mechanism. By jointly constraining the evolution of text and video features under a unified ETF prior through a cross-modal ETF alignment loss and a relation-preserving loss, the method effectively suppresses non-cooperative feature drift. Extensive experiments demonstrate that the proposed approach significantly outperforms existing continual retrieval methods across multiple benchmark datasets, markedly alleviating catastrophic forgetting and enhancing long-term cross-modal retrieval performance.
Abstract
Continual Text-to-Video Retrieval (CTVR) is a challenging multimodal continual learning setting in which models must incrementally learn new semantic categories while maintaining accurate text-video alignment for previously learned ones, which makes the task particularly prone to catastrophic forgetting. A key challenge in CTVR is feature drift, which manifests in two forms: intra-modal feature drift caused by continual learning within each modality, and non-cooperative feature drift across modalities that leads to modality misalignment. To mitigate these issues, we propose StructAlign, a structured cross-modal alignment method for CTVR. First, StructAlign introduces a simplex Equiangular Tight Frame (ETF) geometry as a unified geometric prior to mitigate modality misalignment. Building upon this geometric prior, we design a cross-modal ETF alignment loss that aligns text and video features with category-level ETF prototypes, encouraging the learned representations to form an approximate simplex ETF geometry. In addition, to suppress intra-modal feature drift, we design a Cross-modal Relation Preserving loss, which leverages complementary modalities to preserve cross-modal similarity relations, providing stable relational supervision for feature updates. By jointly addressing non-cooperative feature drift across modalities and intra-modal feature drift, StructAlign effectively alleviates catastrophic forgetting in CTVR. Extensive experiments on benchmark datasets demonstrate that our method consistently outperforms state-of-the-art continual retrieval approaches.
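To make the geometric prior concrete, the following is a minimal sketch (not the paper's implementation) of the standard simplex ETF construction often used as fixed class prototypes: for K classes in a d-dimensional feature space (d ≥ K − 1), the prototype matrix is M = √(K/(K−1)) · U (I_K − (1/K)·11ᵀ), where U has orthonormal columns. The resulting columns are unit-norm and pairwise equiangular with cosine −1/(K−1); the function name `simplex_etf` and the cosine-alignment loss below are our illustrative choices, not names from the paper.

```python
import numpy as np

def simplex_etf(num_classes: int, dim: int, seed: int = 0) -> np.ndarray:
    """Return a (dim x K) matrix whose columns are simplex-ETF prototypes.

    Columns have unit norm and pairwise cosine similarity -1/(K-1),
    the maximally separated configuration for K directions.
    """
    K = num_classes
    assert dim >= K - 1, "a simplex ETF for K classes needs dim >= K - 1"
    rng = np.random.default_rng(seed)
    # Orthonormal columns U (dim x K) via reduced QR of a random Gaussian matrix.
    U, _ = np.linalg.qr(rng.standard_normal((dim, K)))
    # Center the identity so columns sum to zero, then rescale to unit norm.
    M = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return M

def etf_alignment_loss(features: np.ndarray, labels: np.ndarray,
                       prototypes: np.ndarray) -> float:
    """Illustrative cosine-alignment loss: pull each (L2-normalized)
    feature toward its class's ETF prototype (1 - mean cosine)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    target = prototypes[:, labels].T           # (N, dim) matched prototypes
    return float(1.0 - np.mean(np.sum(f * target, axis=1)))
```

In practice such prototypes are typically frozen across tasks, so every task's text and video features are pulled toward the same fixed geometry rather than toward drifting class means.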