Mimic In-Context Learning for Multimodal Tasks

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) suffer from poor generalization and high sensitivity to in-context demonstration (ICD) configurations in in-context learning (ICL). To address this, we propose MimIC—a lightweight, learnable offset method that models query-dependent dynamic offsets within each multi-head attention layer of Transformer-based LMMs. Its core innovations comprise four enhancements: (i) injecting offset vectors after attention outputs, (ii) employing head-wise independent parameterization, (iii) dynamically scaling offset magnitudes via query vectors, and (iv) introducing an inter-layer feature alignment loss. Evaluated on Idefics-9b and Idefics2-8b-base, MimIC achieves state-of-the-art performance on VQAv2, OK-VQA, and image captioning tasks—outperforming existing offset-based ICL methods while demonstrating superior stability and cross-task generalization. The implementation is publicly available.

📝 Abstract
Recently, In-context Learning (ICL) has become a significant inference paradigm in Large Multimodal Models (LMMs), utilizing a few in-context demonstrations (ICDs) to prompt LMMs for new tasks. However, the synergistic effects in multimodal data increase the sensitivity of ICL performance to the configurations of ICDs, stimulating the need for a more stable and general mapping function. Mathematically, in Transformer-based models, ICDs act as "shift vectors" added to the hidden states of query tokens. Inspired by this, we introduce Mimic In-Context Learning (MimIC) to learn stable and generalizable shift effects from ICDs. Specifically, compared with some previous shift vector-based methods, MimIC more strictly approximates the shift effects by integrating lightweight learnable modules into LMMs with four key enhancements: 1) inserting shift vectors after attention layers, 2) assigning a shift vector to each attention head, 3) making shift magnitude query-dependent, and 4) employing a layer-wise alignment loss. Extensive experiments on two LMMs (Idefics-9b and Idefics2-8b-base) across three multimodal tasks (VQAv2, OK-VQA, Captioning) demonstrate that MimIC outperforms existing shift vector-based methods. The code is available at https://github.com/Kamichanw/MimIC.
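The layer-wise alignment loss (enhancement 4 in the abstract) can be sketched as a per-layer distance between the hidden states of the MimIC-augmented model and those of the same model prompted with real ICDs. The following is a minimal illustration under that reading; the function name and the plain MSE form are assumptions, not the released implementation.

```python
import torch

def layer_alignment_loss(mimic_hidden, icl_hidden):
    """Hypothetical sketch of a layer-wise alignment loss.

    mimic_hidden / icl_hidden: lists of per-layer hidden states,
    each of shape (batch, seq_len, dim). The ICL-side states act as
    the (detached) teacher target; the exact weighting or
    normalization used by the paper may differ.
    """
    losses = [
        torch.nn.functional.mse_loss(m, t.detach())
        for m, t in zip(mimic_hidden, icl_hidden)
    ]
    # Average the per-layer distances into a single scalar loss
    return torch.stack(losses).mean()
```

In this sketch, detaching the ICL-side hidden states means gradients flow only into the lightweight MimIC modules, matching the abstract's framing of ICDs as a fixed shift effect to be mimicked.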
Problem

Research questions and friction points this paper is trying to address.

Enhances stability of in-context learning in multimodal models
Improves generalization of shift effects from in-context demonstrations
Optimizes shift vector configurations for better task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

MimIC integrates lightweight learnable modules
MimIC assigns shift vectors per attention head
MimIC uses layer-wise alignment loss
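The per-head, query-dependent shift described above can be sketched as a small module inserted after each attention layer. This is an illustrative sketch only: the class name, the sigmoid gating, and the zero initialization are assumptions, not the authors' code (which is linked in the abstract).

```python
import torch
import torch.nn as nn

class MimicShift(nn.Module):
    """Hypothetical sketch of MimIC-style shift vectors.

    One learnable shift vector per attention head is added to that
    head's attention output, scaled by a magnitude computed from the
    head's query vectors (making the shift query-dependent).
    """

    def __init__(self, num_heads: int, head_dim: int):
        super().__init__()
        # One learnable shift vector per attention head
        self.shift = nn.Parameter(torch.zeros(num_heads, head_dim))
        # Per-head linear map from query to a scalar gate
        self.gate = nn.Parameter(torch.zeros(num_heads, head_dim))

    def forward(self, attn_out: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
        # attn_out, queries: (batch, num_heads, seq_len, head_dim)
        # Query-dependent scalar magnitude per token and head
        alpha = torch.einsum("bhsd,hd->bhs", queries, self.gate).sigmoid()
        # Inject the scaled shift after the attention output
        return attn_out + alpha.unsqueeze(-1) * self.shift[None, :, None, :]
```

With zero-initialized parameters the module starts as an identity on the attention output, so it can be dropped into a pretrained LMM without perturbing it before training begins.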
Yuchu Jiang
Southeast University
Large Language Models, Computer Vision
Jiale Fu
Southeast University
speculative decoding, LLM reasoning
Chenduo Hao
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education
Xinting Hu
Max Planck Institute for Informatics
Multimodal Reasoning, Continual Learning, Semi-Supervised Learning
Yingzhe Peng
Southeast University
LLM, NLP, Multimodal
Xin Geng
School of Computer Science and Engineering, Southeast University
Artificial Intelligence, Pattern Recognition, Machine Learning
Xu Yang
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education