🤖 AI Summary
This work addresses the geometric deficiencies in multimodal representation learning—such as intra-modal collapse and sample-level cross-modal inconsistency—that commonly arise from sole reliance on gradient-based optimization, thereby compromising both unimodal robustness and fusion efficacy. To tackle this, the study introduces a lightweight, plug-and-play geometry-aware regularization framework that explicitly treats representation geometry as a controllable dimension. The approach enhances representational diversity through intra-modal dispersion constraints while mitigating sample-level drift via cross-modal anchoring constraints, all without altering the underlying model architecture. Extensive experiments across multiple multimodal benchmarks demonstrate consistent improvements in both unimodal and multimodal performance, effectively alleviating the modality trade-off problem and validating the efficacy of geometric regularization in shaping well-structured representations.
📝 Abstract
Multimodal learning aims to integrate complementary information from heterogeneous modalities, yet strong optimization alone does not guarantee well-structured representations. Even under carefully balanced training schemes, multimodal models often exhibit geometric pathologies, including intra-modal representation collapse and sample-level cross-modal inconsistency, which degrade both unimodal robustness and multimodal fusion. We identify representation geometry as a missing control axis in multimodal learning and propose \regName, a lightweight geometry-aware regularization framework. \regName enforces two complementary constraints on intermediate embeddings: an intra-modal dispersive regularization that promotes representation diversity, and an inter-modal anchoring regularization that bounds sample-level cross-modal drift without rigid alignment. The proposed regularizer is plug-and-play, requires no architectural modifications, and is compatible with various training paradigms. Extensive experiments across multiple multimodal benchmarks demonstrate consistent improvements in both multimodal and unimodal performance, showing that explicitly regulating representation geometry effectively mitigates modality trade-offs.
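To make the two constraints concrete, here is a minimal sketch of what such a geometry-aware regularizer could look like. The abstract does not specify the loss forms, so every function below is an assumption for illustration: the dispersion term is modeled as a uniformity-style log-sum-exp penalty on pairwise cosine similarities within one modality (high when embeddings collapse), and the anchoring term as a per-sample hinge on cross-modal distance (zero inside a margin, so it bounds drift without forcing exact alignment).

```python
import numpy as np

def dispersion_loss(z, eps=1e-8):
    # Hypothetical intra-modal dispersive term (assumed form, not from the
    # paper): penalize high pairwise cosine similarity among embeddings of
    # one modality, discouraging representation collapse.
    z = z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)
    sim = z @ z.T
    n = z.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]
    # Log-mean-exp is dominated by the largest similarities, so collapsed
    # (near-identical) embeddings incur the highest penalty.
    return np.log(np.mean(np.exp(off_diag)))

def anchoring_loss(z_a, z_b, margin=0.5):
    # Hypothetical inter-modal anchoring term (assumed form): hinge on the
    # per-sample distance between paired embeddings of two modalities.
    # Drift within the margin is free, so alignment is bounded, not rigid.
    d = np.linalg.norm(z_a - z_b, axis=1)
    return np.mean(np.maximum(d - margin, 0.0))

def geometry_regularizer(z_a, z_b, lam_disp=0.1, lam_anchor=0.1):
    # Combined plug-and-play penalty, added to the task loss; weights
    # lam_disp and lam_anchor are illustrative hyperparameters.
    return (lam_disp * (dispersion_loss(z_a) + dispersion_loss(z_b))
            + lam_anchor * anchoring_loss(z_a, z_b))
```

Because the regularizer only consumes intermediate embeddings, it can be added to any existing training loss without touching the model architecture, which matches the plug-and-play claim in the abstract.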