Generative Modeling of Class Probability for Multi-Modal Representation Learning

📅 2025-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multimodal representation learning, inherent modality disparities cause misalignment in latent spaces, severely hindering cross-modal joint reasoning and out-of-distribution (OOD) generalization. To address this, we propose Class-Anchor Alignment (CAA), a novel paradigm that leverages class probability distributions as a unified semantic medium for inter-modal alignment. We further introduce CALM, a class-anchor-driven probabilistic generative alignment framework that jointly integrates class-anchor prompt encoding, contrastive alignment optimization, and a cross-modal probabilistic variational autoencoder (PVAE) to explicitly model alignment uncertainty. Evaluated on four benchmark datasets, CALM consistently surpasses state-of-the-art methods; under OOD settings, it achieves a relative performance gain of 12.7%, significantly improving model robustness and generalization capacity.

📝 Abstract
Multi-modal understanding plays a crucial role in artificial intelligence by enabling models to jointly interpret inputs from different modalities. However, conventional approaches such as contrastive learning often struggle with modality discrepancies, leading to potential misalignments. In this paper, we propose a novel class anchor alignment approach that leverages class probability distributions for multi-modal representation learning. Our method, Class-anchor-ALigned generative Modeling (CALM), encodes class anchors as prompts to generate and align class probability distributions for each modality, enabling more effective alignment. Furthermore, we introduce a cross-modal probabilistic variational autoencoder to model uncertainty in the alignment, enhancing the ability to capture deeper relationships between modalities and data variations. Extensive experiments on four benchmark datasets demonstrate that our approach significantly outperforms state-of-the-art methods, especially in out-of-domain evaluations. This highlights its superior generalization capabilities in multi-modal representation learning.
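The anchor-based alignment the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the cosine-similarity scoring of samples against class-anchor embeddings, the softmax temperature, and the symmetric-KL alignment objective are all assumptions standing in for the paper's actual prompt-encoded anchors and loss.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def class_probabilities(features, anchors, temperature=0.07):
    """Map each sample's features to a class probability distribution
    via cosine similarity against class-anchor embeddings (assumed scoring)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return softmax(f @ a.T / temperature)

def symmetric_kl(p, q, eps=1e-8):
    """Symmetric KL divergence between per-sample class distributions,
    averaged over the batch -- an assumed stand-in for the alignment loss."""
    p, q = p + eps, q + eps
    kl_pq = (p * np.log(p / q)).sum(axis=1)
    kl_qp = (q * np.log(q / p)).sum(axis=1)
    return 0.5 * (kl_pq + kl_qp).mean()
```

The key idea this sketch captures is that both modalities are projected onto the same set of class anchors, so the distributions being aligned live in a shared, modality-agnostic space.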
Problem

Research questions and friction points this paper is trying to address.

Aligning multi-modal representations to reduce discrepancies
Modeling class probability distributions for effective alignment
Enhancing cross-modal understanding with uncertainty-aware learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Class-anchor-aligned generative modeling for multi-modal learning
Cross-modal probabilistic variational autoencoder for uncertainty modeling
Class probability distributions alignment via anchor prompts
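The uncertainty-modeling contribution rests on standard variational-autoencoder machinery. Below is a minimal sketch of the two generic VAE building blocks involved (the reparameterization trick and the Gaussian KL regularizer); the cross-modal encoder/decoder wiring of the paper's PVAE is not reproduced here, and a standard-normal prior is an assumption.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick, which
    keeps the sampling step differentiable in a real autodiff framework."""
    std = np.exp(0.5 * log_var)
    return mu + std * rng.normal(size=mu.shape)

def gaussian_kl(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, I)) summed over latent dims, averaged over
    the batch; regularizes the latent space and scores alignment uncertainty."""
    return 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))
```

A larger predicted variance (`log_var`) widens the latent distribution, which is how a probabilistic encoder can express low confidence in a given cross-modal alignment.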
Jungkyoo Shin
Department of AI, Chung-Ang University
Bumsoo Kim
School of CSE, Chung-Ang University
Eunwoo Kim
Chung-Ang University
Machine Learning · Computer Vision · Robotics