🤖 AI Summary
This study investigates the effectiveness of multimodal in-context learning (ICL) in large multimodal models (LMMs) for image captioning, specifically addressing how to optimally configure in-context examples (ICEs), i.e., image–caption exemplars, to improve performance.
Method: We propose a dual-perspective framework: an *external analysis* systematically evaluates ICE quantity, image retrieval strategies, and caption assignment schemes; an *internal analysis* introduces a novel attention-based metric, complemented by visualization and ablation studies, to characterize model reasoning behavior and assess the feasibility of attention-driven compression.
Contributions/Results: (1) First quantitative characterization of how distinct ICE configurations modulate attention responses; (2) Identification of attention-level mechanisms underlying performance disparities among LMMs sharing the same architecture; (3) Derivation of transferable, generalizable ICE configuration principles; (4) Empirical validation of attention-guided optimization for ICL. Collectively, these findings provide both theoretical foundations and practical guidelines for efficient, interpretable multimodal ICL.
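One of the external configuration dimensions above is image retrieval: choosing which exemplar images to place in the prompt for a given query. A common baseline, sketched below under our own assumptions (the function name and embedding setup are illustrative, not the paper's implementation), is to rank a candidate pool by cosine similarity to the query image embedding and take the top-k as ICEs:

```python
import numpy as np

def select_ices(query_emb, pool_embs, k=4):
    """Pick the k pool images most similar to the query (cosine similarity).

    query_emb: (d,) embedding of the query image.
    pool_embs: (n, d) embeddings of candidate exemplar images.
    Returns indices of the k nearest exemplars, most similar first.
    """
    q = query_emb / np.linalg.norm(query_emb)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarity to each pool image
    return np.argsort(-sims)[:k]     # indices sorted by descending similarity

# Toy example: 2-D embeddings; pool items 0 and 2 point nearly along the query.
pool = np.array([[1.0, 0.1], [0.0, 1.0], [0.9, 0.2], [-1.0, 0.0]])
query = np.array([1.0, 0.0])
print(select_ices(query, pool, k=2))  # → [0 2]
```

In practice the embeddings would come from a vision encoder (e.g., CLIP-style features), and the retrieved images would then be paired with captions under one of the caption assignment schemes studied.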
📝 Abstract
The evolution of large models has been accompanied by the emergence of In-Context Learning (ICL) capabilities. In Natural Language Processing (NLP), numerous studies have demonstrated the effectiveness of ICL. Inspired by the success of Large Language Models (LLMs), researchers have developed Large Multimodal Models (LMMs) with ICL capabilities. However, explorations of demonstration configuration for multimodal ICL remain preliminary. Additionally, the controllability of In-Context Examples (ICEs) provides an efficient and cost-effective means to observe and analyze the inference characteristics of LMMs under varying inputs. This paper conducts a comprehensive external and internal investigation of multimodal in-context learning on the image captioning task. Externally, we explore demonstration configuration strategies along three dimensions: shot number, image retrieval, and caption assignment. We employ multiple metrics to systematically evaluate these strategies and summarize key findings. Internally, we analyze typical LMM attention characteristics and develop attention-based metrics to quantify model behaviors. We also conduct auxiliary experiments to explore the feasibility of attention-driven model acceleration and compression. We further compare performance variations between LMMs with identical model design and pretraining strategies and explain the differences from the perspective of pre-training data characteristics. Our study reveals, through external experiments, how ICE configuration strategies impact model performance and, through internal inspection, the typical attention patterns that characterize LMM inference, providing dual perspectives for understanding multimodal ICL in LMMs. Our method of combining external and internal analysis to investigate large models, along with our newly proposed metrics, can be applied to broader research areas.
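To make the internal analysis concrete, one simple attention-based quantity of the kind described above is the share of attention mass that generated caption tokens place on the ICE portion of the prompt. The sketch below is a minimal illustration under our own assumptions (the function name, the head/layer-averaged attention matrix, and the token-span layout are hypothetical, not the paper's exact metric):

```python
import numpy as np

def ice_attention_share(attn, ice_spans, gen_start):
    """Fraction of the generated tokens' attention mass that lands on
    in-context-example (ICE) tokens.

    attn: (T, T) attention matrix averaged over heads/layers (rows = queries).
    ice_spans: list of (start, end) token index ranges covering each ICE.
    gen_start: index of the first generated token.
    """
    gen_rows = attn[gen_start:]                    # queries = generated tokens
    ice_mask = np.zeros(attn.shape[1], dtype=bool)
    for s, e in ice_spans:
        ice_mask[s:e] = True                       # mark ICE token columns
    return gen_rows[:, ice_mask].sum() / gen_rows.sum()

# Toy 6-token sequence: tokens 0-3 form two ICEs, tokens 4-5 are generated.
attn = np.full((6, 6), 1.0 / 6)                    # uniform attention
print(round(ice_attention_share(attn, [(0, 2), (2, 4)], gen_start=4), 3))  # → 0.667
```

A score near 1 would mean the model's generation is dominated by the demonstrations; comparing such scores across ICE configurations is one way to connect the external and internal views.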