Quantifying Cross-Modality Memorization in Vision-Language Models

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the mechanisms and characteristics of cross-modal memorization in vision-language models, focusing on how well facts learned in one modality can be recalled in the other and on the size of the resulting transfer gap. The authors formalize cross-modal memorization and empirically demonstrate a pronounced asymmetry: knowledge acquired from text is only partially recalled when queried through images, and vice versa. Using a synthetic persona dataset and controlled training-evaluation decoupling, they design a cross-modal evaluation framework and propose a quantitative memorization score. The transfer gap proves robust across settings, including more capable models, machine unlearning, and multi-hop reasoning. Finally, they introduce lightweight mitigation strategies that consistently narrow the cross-modal performance gap by 12–28% across diverse configurations, confirming that cross-modal knowledge is transferable yet constrained by an intrinsic recall gap.
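The summary's "quantitative memorization score" is not defined on this page; the sketch below shows one natural instantiation, assuming an accuracy-based recall probe over (question, gold answer) pairs and a source-minus-target gap. All names here (`recall_accuracy`, `transfer_gap`, `answer_fn`) are illustrative placeholders, not the paper's API.

```python
from typing import Callable, Iterable, Tuple

def recall_accuracy(answer_fn: Callable[[str], str],
                    probes: Iterable[Tuple[str, str]]) -> float:
    """Fraction of (question, gold_answer) probes answered correctly."""
    probes = list(probes)
    correct = sum(answer_fn(q) == gold for q, gold in probes)
    return correct / len(probes)

def transfer_gap(answer_fn: Callable[[str], str],
                 source_probes: Iterable[Tuple[str, str]],
                 target_probes: Iterable[Tuple[str, str]]) -> float:
    """Source-modality recall minus target-modality recall.

    Larger values mean the knowledge transfers less well across
    modalities, i.e. a wider cross-modal gap.
    """
    return (recall_accuracy(answer_fn, source_probes)
            - recall_accuracy(answer_fn, target_probes))

# Usage idea: probe a text-trained model with text questions and with
# image-grounded questions about the same synthetic personas, e.g.
#   gap = transfer_gap(model.answer, text_probes, image_probes)
```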

📝 Abstract
Understanding what and how neural networks memorize during training is crucial, both from the perspective of unintentional memorization of potentially sensitive information and from the standpoint of effective knowledge acquisition for real-world, knowledge-intensive tasks. While previous studies primarily investigate memorization within a single modality, such as text memorization in large language models or image memorization in diffusion models, unified multimodal models are becoming increasingly prevalent in practical applications. In this work, we focus on the unique characteristics of cross-modality memorization and conduct a systematic study centered on vision-language models. To facilitate controlled experiments, we first introduce a synthetic persona dataset comprising diverse synthetic person images and textual descriptions. We quantify factual knowledge memorization and cross-modal transferability by training models on a single modality and evaluating their performance in the other. Our results reveal that facts learned in one modality transfer to the other, but a significant gap exists between recalling information in the source and target modalities. Furthermore, we observe that this gap persists across various scenarios, including more capable models, machine unlearning, and multi-hop reasoning. Finally, we propose a baseline method to mitigate this challenge. We hope our study can inspire future research on developing more robust multimodal learning techniques to enhance cross-modal transferability.
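The training-evaluation decoupling described in the abstract amounts to a 2x2 design: fine-tune on exactly one modality, then measure factual recall in both. Below is a hedged sketch under that assumption; `finetune` and `evaluate` are caller-supplied placeholders standing in for whatever training and probing stack is actually used, not the paper's interfaces.

```python
MODALITIES = ("text", "image")

def cross_modal_matrix(base_model, data, finetune, evaluate):
    """Return recall[train_modality][eval_modality] for the 2x2 design.

    `data` maps each modality name to its training/probing split;
    `finetune(model, split)` returns a trained model and
    `evaluate(model, split)` returns a recall score (both assumed).
    """
    results = {}
    for train_mod in MODALITIES:
        # Train on a single modality only, never on the paired one.
        model = finetune(base_model, data[train_mod])
        results[train_mod] = {
            # Probe recall of the same facts in each modality.
            eval_mod: evaluate(model, data[eval_mod])
            for eval_mod in MODALITIES
        }
    return results
```

The off-diagonal entries of this matrix (train on text, evaluate on images, and vice versa) are where the paper's transfer gap shows up.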
Problem

Research questions and friction points this paper is trying to address.

Quantify cross-modality memorization in vision-language models
Study transferability of learned facts between modalities
Propose method to enhance cross-modal transferability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces synthetic persona dataset for controlled experiments
Quantifies cross-modal memorization and transferability
Proposes a baseline method to enhance transferability (see the sketch after this list)
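This page does not describe the paper's baseline mitigation. One generic approach, offered purely as an assumption and not as the authors' method, is to interleave the text and image renderings of each fact during fine-tuning so the model grounds the same knowledge in both modalities. The field names below (`text_example`, `image_example`) are hypothetical.

```python
import random

def mixed_modality_batches(facts, batch_size, seed=0):
    """Yield training batches mixing both renderings of each fact.

    `facts` is a list of dicts with 'text_example' and 'image_example'
    entries (placeholder field names, not from the paper).
    """
    rng = random.Random(seed)
    examples = []
    for fact in facts:
        examples.append(fact["text_example"])   # textual rendering of the fact
        examples.append(fact["image_example"])  # visual rendering of the fact
    rng.shuffle(examples)  # mix modalities within and across batches
    for i in range(0, len(examples), batch_size):
        yield examples[i:i + batch_size]
```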