EXaMCaP: Subset Selection with Entropy Gain Maximization for Probing Capability Gains of Large Chart Understanding Training Sets

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of fully fine-tuning multimodal large language models to evaluate chart-understanding datasets, a cost that hinders efficient dataset iteration. To overcome this limitation, the paper introduces, for the first time, a greedy subset selection method based on a maximum entropy gain strategy to construct a highly diverse subset from a large-scale training set. This approach efficiently approximates the performance gains achievable through full fine-tuning while significantly accelerating the dataset evaluation pipeline. Extensive experiments demonstrate that the proposed method consistently outperforms existing baselines across various model architectures and dataset scales, confirming its effectiveness and strong generalization capability.

📝 Abstract
Recent works focus on synthesizing Chart Understanding (ChartU) training sets to inject advanced chart knowledge into Multimodal Large Language Models (MLLMs), where the sufficiency of the knowledge is typically verified by quantifying capability gains via the fine-tune-then-evaluate paradigm. However, full-set fine-tuning of MLLMs to assess such gains incurs significant time costs, hindering the iterative refinement cycles of the ChartU dataset. Reviewing the ChartU dataset synthesis and data selection domains, we find that subsets can potentially probe the MLLMs' capability gains from full-set fine-tuning. Given that data diversity is vital for boosting MLLMs' performance and entropy reflects this feature, we propose EXaMCaP, which uses entropy gain maximization to select a subset. To obtain a high-diversity subset, EXaMCaP chooses the maximum-entropy subset from the large ChartU dataset. As enumerating all possible subsets is impractical, EXaMCaP iteratively selects samples to maximize the gain in set entropy relative to the current set, approximating the maximum-entropy subset of the full dataset. Experiments show that EXaMCaP outperforms baselines in probing the capability gains of the ChartU training set, along with its strong effectiveness across diverse subset sizes and compatibility with various MLLM architectures.
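The greedy procedure the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the paper does not specify its entropy estimator here, so this sketch assumes samples are embedded as feature vectors and uses a Gaussian log-determinant as a differential-entropy proxy; the function names (`set_entropy`, `greedy_entropy_gain_subset`) are hypothetical.

```python
import numpy as np

def set_entropy(features):
    """Diversity proxy: differential entropy of a Gaussian fit to the set,
    up to constants, i.e. 0.5 * log det of the feature covariance.
    A small ridge term keeps the covariance non-singular for small sets."""
    cov = np.cov(features, rowvar=False) + 1e-3 * np.eye(features.shape[1])
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * logdet

def greedy_entropy_gain_subset(features, k, seed=0):
    """Iteratively pick k samples; each step adds the sample that
    maximizes the entropy gain relative to the current subset,
    approximating the maximum-entropy subset of the full set."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]  # random initial sample
    remaining = set(range(n)) - set(selected)
    while len(selected) < k:
        base = set_entropy(features[selected]) if len(selected) > 1 else 0.0
        best_idx, best_gain = None, -np.inf
        for i in remaining:
            gain = set_entropy(features[selected + [i]]) - base
            if gain > best_gain:
                best_idx, best_gain = i, gain
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected
```

As written, each step re-evaluates the entropy of every candidate extension, which is expensive for large pools; an efficient implementation would update the log-determinant incrementally rather than recomputing it from scratch.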
Problem

Research questions and friction points this paper is trying to address.

Chart Understanding
Multimodal Large Language Models
Subset Selection
Capability Gain Probing
Training Set Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

entropy gain maximization
subset selection
chart understanding
multimodal large language models
data diversity
Jiapeng Liu
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyberspace Security, University of Chinese Academy of Sciences, Beijing, China
Liang Li
Institute of Computing Technology, CAS
Computer Vision, Image Understanding, Multimedia Content Analysis
Bing Li
Professor, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Video Analysis, Color Constancy, Web Mining, Multimedia
Peng Fu
Institute of Information Engineering, Chinese Academy of Sciences
Natural Language Processing
Xiyan Gao
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
Chengyang Fang
School of Computer and Artificial Intelligence, Jiangxi University of Finance and Economics, Jiangxi, China
Xiaoshuai Hao
Beijing Academy of Artificial Intelligence (BAAI)
Vision and Language
Can Ma
Unknown affiliation