Negate or Embrace: On How Misalignment Shapes Multimodal Representation Learning

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world image–text data commonly suffer from inter-modal semantic misalignment, arising from selection bias and perturbation bias. Method: We propose the first formal definition of the misalignment generation mechanism and establish a "bias-invariant semantic subset" theoretical framework, which unifies the explanation of how multimodal contrastive learning (MMCL) implicitly learns semantic representations robust to both biases. Our approach integrates latent-variable modeling, theoretical analysis of MMCL, and empirical validation on synthetic data and on real image–text datasets (CC3M and LAION). Contribution/Results: We prove theoretically that MMCL representations capture exactly this bias-invariant semantic subset. Empirically, we demonstrate that misalignment type and severity significantly shape downstream performance, yielding quantifiable, actionable guidelines for data curation, model architecture design, and evaluation protocols.
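For concreteness, MMCL methods in the CLIP family train an image encoder and a text encoder with a symmetric contrastive (InfoNCE) objective over a batch of image-text pairs; it is the representations learned by this kind of objective that the paper analyzes. Below is a minimal PyTorch sketch of that objective, not the authors' code: the function name, embedding dimensions, and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mmcl_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) tensors from the two encoders.
    Matched pairs sit on the diagonal of the similarity matrix.
    Illustrative sketch only, not the paper's implementation.
    """
    # L2-normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by temperature.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast each image against all texts, and each text against all images.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```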

📝 Abstract
Multimodal representation learning, exemplified by multimodal contrastive learning (MMCL) on image-text pairs, aims to learn powerful representations by aligning cues across modalities. This approach relies on the core assumption that each image-text pair constitutes two representations of an identical concept. However, recent research has revealed that real-world datasets often exhibit misalignment, and there are two distinct viewpoints on how to address this issue: one suggests mitigating the misalignment, the other leveraging it. We seek here to reconcile these seemingly opposing perspectives and to provide a practical guide for practitioners. Using latent variable models, we formalize misalignment by introducing two specific mechanisms: selection bias, where some semantic variables are missing, and perturbation bias, where semantic variables are distorted; both affect latent variables shared across modalities. Our theoretical analysis demonstrates that, under mild assumptions, the representations learned by MMCL capture exactly the information related to the subset of semantic variables invariant to selection and perturbation biases. This provides a unified perspective for understanding misalignment. Based on this, we further offer actionable insights into how misalignment should inform the design of real-world ML systems. We validate our theoretical findings through extensive empirical studies on both synthetic data and real image-text datasets, shedding light on the nuanced impact of misalignment on multimodal representation learning.
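To make the two bias mechanisms concrete, the following toy NumPy sketch mimics the kind of latent-variable generating process the abstract describes: a shared semantic vector produces both views, the text view loses a random subset of the semantic variables (selection bias), and the variables it retains are distorted by noise (perturbation bias). All names, distributions, and parameter values here are illustrative assumptions, not the paper's formal model.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pair(dim_z: int = 8,
                  keep: float = 0.75,       # fraction of semantics the caption retains
                  noise_scale: float = 0.5):
    """Toy generative process with selection and perturbation biases.

    A shared semantic vector z produces an 'image' view and a 'text' view.
    Selection bias: the text view drops a random subset of semantic variables.
    Perturbation bias: the retained variables are distorted by additive noise.
    Illustrative sketch only, not the paper's formal latent variable model.
    """
    z = rng.normal(size=dim_z)                            # shared semantics

    image_view = z + 0.1 * rng.normal(size=dim_z)         # near-faithful rendering

    mask = rng.random(dim_z) < keep                       # selection bias: drop some dims
    perturbed = z + noise_scale * rng.normal(size=dim_z)  # perturbation bias
    text_view = np.where(mask, perturbed, 0.0)            # missing semantics zeroed out

    return z, image_view, text_view, mask

z, img, txt, mask = generate_pair()
print("semantic variables kept in caption:", mask.astype(int))
```

On this toy reading, the paper's theoretical claim is that MMCL representations recover exactly the coordinates of z that remain intact under both the masking and the noising across the data distribution.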
Problem

Research questions and friction points this paper is trying to address.

Addressing misalignment in multimodal representation learning
Formalizing misalignment via selection and perturbation biases
Providing practical insights for real-world ML system design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent variable models formalize misalignment mechanisms
MMCL captures invariant semantic variables under biases
Theoretical insights guide real-world ML system design
Yichao Cai
Australian Institute for Machine Learning, The University of Adelaide
Yuhang Liu
The University of Adelaide
Representation Learning · LLMs · Latent Variable Models · Responsible AI
Erdun Gao
The University of Adelaide
Causal Inference
Tianjiao Jiang
Australian Institute for Machine Learning, The University of Adelaide
Zhen Zhang
Australian Institute for Machine Learning, The University of Adelaide
A. Hengel
Australian Institute for Machine Learning, The University of Adelaide
J. Q. Shi
Australian Institute for Machine Learning, The University of Adelaide