Model alignment using inter-modal bridges

📅 2025-05-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of cross-modal representation alignment, specifically between text/image representations and between biological and artificial neural activity, within foundation models. We propose a semi-supervised alignment framework based on conditional flow matching, which obviates the need for large-scale paired data. Our approach introduces an "inter-space bridge cost" formulation grounded in optimal transport theory and integrates two complementary, label-efficient pathways: memory-augmented alignment and implicit space bridging. This enables low-supervision alignment across heterogeneous latent spaces. Evaluated on MNIST, ImageNet, and the Majaj et al. (2015) neural dataset, our method achieves downstream performance on object recognition and image generation comparable to fully supervised end-to-end training while using less than 20% labelled data. It substantially reduces reliance on domain-specific priors and paired samples, establishing a new paradigm for cross-modal foundation-model reuse.
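The conditional-flow-matching idea behind the framework can be sketched on a toy problem. Everything below is an illustrative assumption rather than the paper's implementation: the two latent spaces differ by a fixed translation (so the straight-line flow has constant velocity), and the velocity field is a simple affine map fitted in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's): two 2-D latent spaces whose
# representations of the same items differ by a fixed translation c.
c = np.array([2.0, -1.0])
z_src = rng.normal(size=(256, 2))
z_tgt = z_src + c

# Conditional flow matching with straight-line paths:
#   z_t = (1 - t) * z_src + t * z_tgt,  target velocity u = z_tgt - z_src.
# Fit an affine velocity field v(z) = z @ W.T + b by least squares on the
# CFM regression objective ||v(z_t) - u||^2.
t = rng.uniform(size=(256, 1))
z_t = (1 - t) * z_src + t * z_tgt
u = z_tgt - z_src

X = np.hstack([z_t, np.ones((256, 1))])       # design matrix with bias column
coef, *_ = np.linalg.lstsq(X, u, rcond=None)  # closed-form fit of W and b
W, b = coef[:2].T, coef[2]

# Transport new source latents by integrating dz/dt = v(z) from t=0 to t=1
# with Euler steps.
def transport(z, steps=50):
    dt = 1.0 / steps
    for _ in range(steps):
        z = z + dt * (z @ W.T + b)
    return z

z_new = rng.normal(size=(8, 2))
err = np.abs(transport(z_new) - (z_new + c)).max()
```

In this degenerate case the fitted field recovers the constant velocity `c` exactly; in the paper's setting the flow between heterogeneous latent spaces would be time-dependent and parameterized by a neural network.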

📝 Abstract
Foundation models have demonstrated remarkable performance across modalities such as language and vision. However, model reuse across distinct modalities (e.g., text and vision) remains limited due to the difficulty of aligning internal representations. Existing methods require extensive paired training data or are constrained to specific domains. We introduce a semi-supervised approach for model alignment via conditional flow matching. The conditional flow between latent spaces of different modalities (e.g., text-to-image or biological-to-artificial neuronal activity) can be learned in two settings: (1) solving a (balanced or unbalanced) optimal transport problem with an inter-space bridge cost, and (2) performing memory-efficient alignment using labelled exemplars. Despite being constrained by the original models' capacity, our method, under both settings, matches the downstream task performance of end-to-end trained models on object recognition and image generation tasks across the MNIST, ImageNet, and Majaj et al. (2015) datasets, particularly when labelled training data is scarce (<20%). Our method provides a data-efficient solution for inter-modal model alignment with minimal supervision.

Problem

Research questions and friction points this paper is trying to address.

Aligning internal representations across distinct modalities like text and vision
Reducing reliance on extensive paired training data for model alignment
Achieving efficient inter-modal alignment with minimal supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-supervised model alignment via conditional flow matching
Optimal transport with inter-space bridge cost
Memory-efficient alignment using labeled exemplars
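One plausible instantiation of the "inter-space bridge cost" idea is to describe each sample by its similarities to a few labelled anchor (exemplar) pairs, so that samples from heterogeneous spaces become directly comparable, and then match the two spaces under that cost. The anchor-based similarity profiles and the nearest-neighbour matching below (standing in for a full balanced-OT solve) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two latent spaces of different dimensionality holding the same 16 items in
# shuffled order; the target view applies an orthonormal map Q (toy setup).
n, d_src, d_tgt = 16, 8, 12
z_src = rng.normal(size=(n, d_src))
Q, _ = np.linalg.qr(rng.normal(size=(d_tgt, d_src)))  # orthonormal columns
perm = rng.permutation(n)
z_tgt = z_src[perm] @ Q.T

# Bridge cost via a handful of labelled anchor pairs: describe every sample by
# its cosine similarities to the anchors. These similarity profiles live in a
# shared space even though d_src != d_tgt.
inv = np.argsort(perm)            # inv[i] = target index of source item i
anchor_idx = np.arange(4)         # the few "labelled exemplars"

def profile(z, z_anchor):
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    an = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    return zn @ an.T

p_src = profile(z_src, z_src[anchor_idx])
p_tgt = profile(z_tgt, z_tgt[inv[anchor_idx]])
cost = ((p_src[:, None, :] - p_tgt[None, :, :]) ** 2).sum(-1)

# Nearest-neighbour matching under the bridge cost (a stand-in for the full
# balanced-OT assignment; here each row has an exact zero at the true match
# because the orthonormal map preserves cosine similarities).
match = cost.argmin(axis=1)
accuracy = (match == inv).mean()
```

In the paper's setting the matching would come from solving a (balanced or unbalanced) optimal transport problem over this kind of cost matrix, which tolerates noise and mass imbalance that simple nearest-neighbour matching does not.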