FewMMBench: A Benchmark for Multimodal Few-Shot Learning

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of multimodal large language models (MLLMs) in few-shot learning scenarios, particularly with respect to in-context learning and chain-of-thought prompting. To this end, we introduce FewMMBench, the first comprehensive benchmark dedicated to multimodal few-shot learning, covering diverse tasks such as attribute recognition and temporal reasoning. We systematically evaluate 26 open-source models from six model families under zero-shot, few-shot, and chain-of-thought-enhanced settings. Our findings reveal that instruction-tuned models, despite strong zero-shot capabilities, show little sensitivity to in-context examples or chain-of-thought prompts, and in some cases their performance degrades. Moreover, gains from retrieval-based example selection and increased context length are marginal, underscoring the value of FewMMBench as a diagnostic tool for multimodal few-shot learning.

📝 Abstract
As multimodal large language models (MLLMs) advance in handling interleaved image-text data, assessing their few-shot learning capabilities remains an open challenge. In this paper, we introduce FewMMBench, a comprehensive benchmark designed to evaluate MLLMs under few-shot conditions, with a focus on In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting. Covering a diverse suite of multimodal understanding tasks, from attribute recognition to temporal reasoning, FewMMBench enables systematic analysis across task types, model families, and prompting strategies. We evaluate 26 open-weight MLLMs from six model families across zero-shot, few-shot, and CoT-augmented few-shot settings. Our findings reveal that instruction-tuned models exhibit strong zero-shot performance but benefit minimally, or even regress, with additional demonstrations or CoT reasoning. Retrieval-based demonstrations and increased context size also yield limited gains. These results highlight FewMMBench as a rigorous testbed for diagnosing and advancing few-shot capabilities in multimodal LLMs. The data is available at: https://huggingface.co/datasets/mustafaa/FewMMBench
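The three evaluation settings named in the abstract (zero-shot, few-shot, and CoT-augmented few-shot) differ mainly in how the prompt is assembled. A minimal sketch of that difference, assuming a hypothetical `<image>` placeholder token and a generic "Let's think step by step." CoT trigger (this is an illustration, not the paper's actual evaluation harness):

```python
def build_prompt(question, demos=(), cot=False):
    """Assemble an interleaved image-text prompt.

    demos: sequence of (question, answer) pairs used as in-context examples.
    cot:   if True, append a chain-of-thought trigger to each question.
    """
    trigger = " Let's think step by step." if cot else ""
    parts = []
    # Few-shot demonstrations precede the actual query.
    for q, a in demos:
        parts.append(f"<image> Question: {q}{trigger}\nAnswer: {a}")
    # The query itself ends with an open "Answer:" for the model to complete.
    parts.append(f"<image> Question: {question}{trigger}\nAnswer:")
    return "\n\n".join(parts)

# Zero-shot: the query alone.
zs = build_prompt("What color is the car?")

# Few-shot: k=2 demonstrations prepended before the query.
fs = build_prompt(
    "What color is the car?",
    demos=[("What animal is shown?", "A dog."),
           ("How many people are there?", "Three.")],
)

# CoT-augmented few-shot: same structure plus a reasoning trigger.
cs = build_prompt(
    "What color is the car?",
    demos=[("What animal is shown?", "A dog.")],
    cot=True,
)
```

Under this framing, the paper's finding is that instruction-tuned models answer the `zs`-style prompt well, while moving to the `fs`- or `cs`-style prompts adds little and can even hurt.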
Problem

Research questions and friction points this paper is trying to address.

multimodal few-shot learning
multimodal large language models
In-Context Learning
Chain-of-Thought prompting
benchmark evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-Shot Learning
Multimodal Large Language Models
In-Context Learning
Chain-of-Thought Prompting
Benchmark