When Gradient Optimization Is Not Enough: A Dispersive and Anchoring Geometric Regularizer for Multimodal Learning

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the geometric deficiencies in multimodal representation learning—such as intra-modal collapse and sample-level cross-modal inconsistency—that commonly arise from sole reliance on gradient-based optimization, thereby compromising both unimodal robustness and fusion efficacy. To tackle this, the study introduces a lightweight, plug-and-play geometry-aware regularization framework that explicitly treats representation geometry as a controllable dimension. The approach enhances representational diversity through intra-modal dispersion constraints while mitigating sample-level drift via cross-modal anchoring constraints, all without altering the underlying model architecture. Extensive experiments across multiple multimodal benchmarks demonstrate consistent improvements in both unimodal and multimodal performance, effectively alleviating the modality trade-off problem and validating the efficacy of geometric regularization in shaping well-structured representations.

📝 Abstract
Multimodal learning aims to integrate complementary information from heterogeneous modalities, yet strong optimization alone does not guarantee well-structured representations. Even under carefully balanced training schemes, multimodal models often exhibit geometric pathologies, including intra-modal representation collapse and sample-level cross-modal inconsistency, which degrade both unimodal robustness and multimodal fusion. We identify representation geometry as a missing control axis in multimodal learning and propose \regName, a lightweight geometry-aware regularization framework. \regName enforces two complementary constraints on intermediate embeddings: an intra-modal dispersive regularization that promotes representation diversity, and an inter-modal anchoring regularization that bounds sample-level cross-modal drift without enforcing rigid alignment. The proposed regularizer is plug-and-play, requires no architectural modifications, and is compatible with various training paradigms. Extensive experiments across multiple multimodal benchmarks demonstrate consistent improvements in both multimodal and unimodal performance, showing that explicitly regulating representation geometry effectively mitigates modality trade-offs.
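The abstract does not give the loss formulas, but the two constraints it describes, intra-modal dispersion and sample-level cross-modal anchoring, can be illustrated with a minimal numpy sketch. The function names, the cosine-similarity dispersion penalty, and the hinge margin below are assumptions for illustration, not the paper's actual definitions:

```python
import numpy as np

def dispersive_loss(Z, eps=1e-8):
    """Intra-modal dispersion (assumed form): penalize high average
    pairwise cosine similarity among a batch of embeddings Z (n x d),
    discouraging representation collapse."""
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + eps)
    S = Zn @ Zn.T                      # pairwise cosine similarities
    n = Z.shape[0]
    # Mean off-diagonal similarity: high when embeddings collapse together.
    return (S.sum() - np.trace(S)) / (n * (n - 1))

def anchoring_loss(Za, Zb, margin=0.5, eps=1e-8):
    """Inter-modal anchoring (assumed form): a per-sample hinge that
    bounds cross-modal drift beyond a margin, without forcing the two
    modalities into rigid exact alignment."""
    Za_n = Za / (np.linalg.norm(Za, axis=1, keepdims=True) + eps)
    Zb_n = Zb / (np.linalg.norm(Zb, axis=1, keepdims=True) + eps)
    d = np.linalg.norm(Za_n - Zb_n, axis=1)   # per-sample drift
    return np.maximum(d - margin, 0.0).mean() # penalize only large drift
```

Under this sketch, a collapsed batch (all embeddings pointing the same way) yields a high dispersive loss, while paired embeddings that stay within the margin incur no anchoring penalty, matching the abstract's "bounds drift without rigid alignment" description.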
Problem

Research questions and friction points this paper is trying to address.

multimodal learning
representation geometry
intra-modal collapse
cross-modal inconsistency
geometric pathology
Innovation

Methods, ideas, or system contributions that make the work stand out.

geometric regularization
multimodal learning
representation diversity
cross-modal consistency
plug-and-play regularizer
Zixuan Xia
Department of Informatics, University of Bern, Bern, Switzerland; School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Hao Wang
Department of Informatics, University of Bern, Bern, Switzerland; School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Pengcheng Weng
Department of Informatics, University of Bern, Bern, Switzerland; School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Yanyu Qian
College of Computing and Data Science, Nanyang Technological University, Singapore; School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Yangxin Xu
The Chinese University of Hong Kong
William Dan
Department of Informatics, University of Bern, Bern, Switzerland; School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Fei Wang
Xi'an Jiaotong University
computer vision · artificial intelligence