Coling-UniA at SciVQA 2025: Few-Shot Example Retrieval and Confidence-Informed Ensembling for Multimodal Large Language Models

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak few-shot example retrieval and the limited generalization of multimodal large language models (MLLMs) on scientific visual question answering (SciVQA), this paper proposes an adaptive ensemble framework. First, it retrieves the most semantically relevant few-shot examples and selects the MLLM and prompt template based on the figure and question type. Second, it applies confidence-informed ensembling, choosing among the models' candidate answers according to their confidence levels. Evaluation follows the shared task's metrics: the average F1 over ROUGE-1, ROUGE-L, and BERTScore. On the SciVQA 2025 blind test set, the system achieves an average F1 of 85.12, ranking third of seven participating teams. The implementation is publicly available.
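The confidence-informed ensembling described above can be sketched minimally. The interface below is an assumption (the summary does not specify how confidence is obtained): each model is taken to return an answer string plus a self-reported confidence, and the ensemble keeps the answer from the more confident model.

```python
# Hypothetical sketch of confidence-informed answer selection between
# two MLLMs. The (answer, confidence) interface is an assumption made
# for illustration, not the paper's exact API.

def select_answer(candidates):
    """candidates: list of (answer, confidence) pairs, one per model.

    Returns the answer with the highest confidence and that confidence.
    """
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer, confidence

# Toy usage with two hypothetical model outputs:
outputs = [("42 GFLOPs", 0.71), ("40 GFLOPs", 0.88)]
best, conf = select_answer(outputs)
```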

📝 Abstract
This paper describes our system for the SciVQA 2025 Shared Task on Scientific Visual Question Answering. Our system employs an ensemble of two Multimodal Large Language Models and various few-shot example retrieval strategies. The model and few-shot setting are selected based on the figure and question type. We also select answers based on the models' confidence levels. On the blind test data, our system ranks third out of seven with an average F1 score of 85.12 across ROUGE-1, ROUGE-L, and BERTScore. Our code is publicly available.
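The reported score is the mean of three per-metric F1 values (ROUGE-1, ROUGE-L, BERTScore). A one-line sketch of that aggregate, with placeholder F1 values rather than the paper's numbers:

```python
# Sketch of the shared task's aggregate score: the unweighted mean of
# three F1 metrics. The input values below are illustrative placeholders.

def mean_f1(rouge1_f1, rougel_f1, bertscore_f1):
    return (rouge1_f1 + rougel_f1 + bertscore_f1) / 3

score = mean_f1(0.80, 0.78, 0.97)
```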
Problem

Research questions and friction points this paper addresses.

Improving few-shot example retrieval for multimodal models
Enhancing answer selection via confidence-informed ensembling
Optimizing model choice based on figure and question types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble of two Multimodal Large Language Models
Few-shot example retrieval strategies
Confidence-based answer selection
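The few-shot retrieval contribution above can be sketched as nearest-neighbor search over question embeddings. The embedding model and example store are assumptions; tiny hand-made vectors stand in for real sentence embeddings, and examples are ranked by cosine similarity to the query.

```python
import math

# Hedged sketch of semantic few-shot example retrieval: embed the new
# question, then take the k stored examples whose question embeddings
# are most cosine-similar. Vectors here are toy stand-ins for real
# sentence embeddings.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve_examples(query_vec, example_store, k=2):
    """example_store: list of (example, embedding) pairs."""
    ranked = sorted(example_store,
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [example for example, _ in ranked[:k]]

store = [("Q/A about a bar chart", [1.0, 0.0]),
         ("Q/A about a line plot", [0.6, 0.8]),
         ("Q/A about a table",     [0.0, 1.0])]
few_shot = retrieve_examples([0.9, 0.1], store, k=2)
```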