UniICL: Systematizing Unified Multimodal In-context Learning through a Capability-Oriented Taxonomy

📅 2026-03-25
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the instability and task dependency of multimodal in-context learning, which is highly sensitive to example selection, formatting, and cross-modal interference. To this end, the authors propose the first capability-oriented, six-level taxonomy that systematically characterizes the functional roles of in-context examples, from basic perception to high-order discernment. They further introduce UniICL-760K, a large-scale multimodal corpus, along with UniICL-Bench, a standardized evaluation benchmark. Finally, a lightweight, plug-and-play context-adaptive prototype modulation module is designed to mitigate cross-modal interference. Experiments show that the proposed approach achieves robust few-shot adaptation on UniICL-Bench and outperforms larger multimodal large language model baselines on most understanding tasks.
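
The module's implementation details are not reproduced in this summary, so the following is only a minimal, hypothetical PyTorch sketch of what a plug-and-play prototype modulation module could look like: demonstration features are pooled into a prototype, which produces a FiLM-style per-channel scale and shift applied to the query features. The class name, mean pooling, and the scale/shift form are assumptions, not the authors' verified design.

```python
import torch
import torch.nn as nn

class ContextAdaptivePrototypeModulator(nn.Module):
    """Hypothetical sketch: pool in-context demonstration features into a
    prototype, then use it to scale/shift query features so demonstrations
    steer the query representation without any fine-tuning."""

    def __init__(self, dim: int):
        super().__init__()
        # Map the context prototype to per-channel scale and shift.
        self.to_scale_shift = nn.Linear(dim, 2 * dim)

    def forward(self, query_feats: torch.Tensor, demo_feats: torch.Tensor) -> torch.Tensor:
        # query_feats: (batch, seq, dim); demo_feats: (batch, n_shots, dim)
        prototype = demo_feats.mean(dim=1, keepdim=True)          # (batch, 1, dim)
        scale, shift = self.to_scale_shift(prototype).chunk(2, dim=-1)
        # Residual modulation: with scale and shift near zero, the module
        # reduces to the identity, which keeps it "plug-and-play".
        return query_feats * (1 + scale) + shift
```

Zero-initializing `to_scale_shift` would make the module start as an identity mapping, one common way for a bolt-on component to avoid disrupting a pretrained backbone at the start of training.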

📝 Abstract
In-context Learning enables training-free adaptation via demonstrations but remains highly sensitive to example selection and formatting. In unified multimodal models spanning understanding and generation, this sensitivity is exacerbated by cross-modal interference and varying cognitive demands. Consequently, In-context Learning efficacy is often non-monotonic and highly task-dependent. To diagnose these behaviors, we introduce a six-level capability-oriented taxonomy that categorizes the functional role of demonstrations from basic perception to high-order discernment. Guided by this cognitive framework, we construct UniICL-760K, a large-scale corpus featuring curated 8-shot In-context Learning episodes across 15 subtasks, alongside UniICL-Bench for rigorous, controlled evaluation. As an architectural intervention to stabilize few-shot adaptation, we propose the Context-Adaptive Prototype Modulator, a lightweight, plug-and-play module. Evaluations on UniICL-Bench show that our approach yields highly competitive unified results, outperforming larger-parameter multimodal large language model baselines on most understanding In-context Learning tasks. Data and code will be available soon at https://github.com/xuyicheng-zju/UniICL.
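
As a concrete illustration of what a curated 8-shot episode might look like as data, here is a small hypothetical Python sketch; the field names, the `<image:...>` placeholder convention, and the prompt serialization are all assumptions, since the actual UniICL-760K schema is not given in this summary.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Demonstration:
    # One in-context example: an (image, instruction, answer) triple.
    image_path: str
    instruction: str
    answer: str

@dataclass
class ICLEpisode:
    # An 8-shot episode: eight demonstrations followed by one query,
    # all drawn from the same subtask (one of the 15 in UniICL-760K).
    subtask: str
    demonstrations: List[Demonstration] = field(default_factory=list)
    query: Optional[Demonstration] = None

    def to_prompt(self) -> str:
        # Serialize demonstrations plus the query into one prompt string,
        # a common way to feed an ICL episode to a multimodal LLM.
        parts = [
            f"<image:{d.image_path}> {d.instruction}\n{d.answer}"
            for d in self.demonstrations
        ]
        parts.append(f"<image:{self.query.image_path}> {self.query.instruction}")
        return "\n\n".join(parts)
```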
Problem

Research questions and friction points this paper is trying to address.

In-context Learning
multimodal models
cross-modal interference
task-dependency
few-shot adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-context Learning
Multimodal Learning
Capability-Oriented Taxonomy
Context-Adaptive Prototype Modulator
Unified Evaluation Benchmark