🤖 AI Summary
This work addresses the high computational cost of fully fine-tuning multimodal large language models (MLLMs) to evaluate chart-understanding datasets, a cost that hinders efficient dataset iteration. To overcome this limitation, the paper introduces a greedy subset selection method based on a maximum-entropy-gain strategy, which constructs a highly diverse subset of a large-scale training set. Fine-tuning on this subset efficiently approximates the capability gains achievable through full-set fine-tuning while significantly accelerating the dataset evaluation pipeline. Extensive experiments demonstrate that the proposed method consistently outperforms existing baselines across various model architectures and subset sizes, confirming its effectiveness and strong generalization capability.
📝 Abstract
Recent works focus on synthesizing Chart Understanding (ChartU) training sets to inject advanced chart knowledge into Multimodal Large Language Models (MLLMs), where the sufficiency of the knowledge is typically verified by quantifying capability gains via the fine-tune-then-evaluate paradigm. However, fine-tuning MLLMs on the full set to assess such gains incurs significant time costs, hindering the iterative refinement cycles of the ChartU dataset. Reviewing the ChartU dataset synthesis and data selection literature, we find that subsets can potentially probe the MLLMs' capability gains from full-set fine-tuning. Given that data diversity is vital for boosting MLLMs' performance and entropy reflects this property, we propose EXaMCaP, which uses entropy gain maximization to select a subset. To obtain a high-diversity subset, EXaMCaP chooses the maximum-entropy subset from the large ChartU dataset. As enumerating all possible subsets is impractical, EXaMCaP iteratively selects the sample that maximizes the gain in set entropy relative to the current selection, thereby approximating the maximum-entropy subset of the full dataset. Experiments show that EXaMCaP outperforms baselines in probing the capability gains of the ChartU training set, and it remains effective across diverse subset sizes and compatible with various MLLM architectures.
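The greedy selection loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each sample carries a discrete diversity feature (e.g. a chart-type or topic label) and measures set entropy as the Shannon entropy of the label distribution; the actual EXaMCaP entropy definition over ChartU data may differ.

```python
import math
from collections import Counter


def set_entropy(labels):
    """Shannon entropy (bits) of the label distribution in a subset."""
    if not labels:
        return 0.0
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def greedy_max_entropy_subset(samples, labels, k):
    """Greedily pick k samples, each maximizing the entropy gain of the
    selected set -- a proxy for the diversity objective in the abstract."""
    selected, sel_labels = [], []
    remaining = list(range(len(samples)))
    for _ in range(min(k, len(samples))):
        base = set_entropy(sel_labels)
        # Choose the candidate whose addition yields the largest entropy gain.
        best_i = max(
            remaining,
            key=lambda i: set_entropy(sel_labels + [labels[i]]) - base,
        )
        selected.append(samples[best_i])
        sel_labels.append(labels[best_i])
        remaining.remove(best_i)
    return selected
```

With hypothetical samples labeled `bar`, `bar`, `line`, `pie`, selecting three items yields one sample per label, since adding an unseen label always raises the set entropy more than repeating a seen one.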