🤖 AI Summary
Vision-Language Models (VLMs) suffer from overfitting and imbalanced performance during fine-tuning due to real-world data biases, annotation noise, and class distribution skew. Method: This paper proposes a controllable synthetic data generation framework that constructs unbiased, attribute-balanced synthetic scene datasets by decoupling sampling across spatial position, color, shape, and size, coupled with precise programmatic annotation, thereby eliminating distributional bias and human labeling errors while enabling cross-domain transfer. Contribution/Results: Experiments on real-world benchmarks, including COCO, demonstrate that VLMs fine-tuned on the synthetic data significantly outperform those trained via conventional fine-tuning on absolute positional reasoning tasks. The proposed approach yields higher overall accuracy and more uniform performance across spatial relations, validating the effectiveness of synthetic-data-driven generalization for spatial reasoning.
📝 Abstract
Fine-tuning Vision-Language Models (VLMs) is a common strategy to improve performance, following ad-hoc collection and annotation of real-world scenes. However, this process is often prone to biases, errors, and distribution imbalance, resulting in overfitting and uneven performance. Although a few studies have tried to address this problem by generating synthetic data, they lacked control over distribution bias and annotation quality. To address these challenges, we redesign the fine-tuning process in two ways. First, we control the generation of data and its annotations, ensuring it is free from bias, distribution imbalance, and annotation errors. We automatically construct the dataset by comprehensively sampling objects' attributes, including color, shape, size, and position within the scene. Second, using this annotated dataset, we fine-tune state-of-the-art VLMs and assess performance transferability to real-world data on the absolute position task. We conduct exhaustive evaluations on both synthetic and real-world benchmarks. Our experiments reveal two key findings: 1) fine-tuning on balanced synthetic data yields uniform performance across the visual scene and mitigates common biases; and 2) fine-tuning on synthetic stimuli significantly improves performance on real-world data (COCO), outperforming models fine-tuned in the matched setting.
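The decoupled, attribute-balanced sampling with programmatic annotation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attribute vocabularies, the question template, and the `build_balanced_dataset` helper are all assumptions made for clarity.

```python
import itertools
from collections import Counter

# Illustrative attribute vocabularies (assumed; the paper's actual
# sampling grids are not specified here).
COLORS = ["red", "green", "blue", "yellow"]
SHAPES = ["cube", "sphere", "cylinder"]
SIZES = ["small", "large"]
POSITIONS = ["left", "right", "top", "bottom"]  # absolute-position labels

def build_balanced_dataset():
    """Enumerate the full Cartesian product of decoupled attributes so
    every (color, shape, size, position) combination appears exactly
    once, removing distributional bias by construction."""
    dataset = []
    for color, shape, size, pos in itertools.product(
        COLORS, SHAPES, SIZES, POSITIONS
    ):
        # Programmatic annotation: the label is derived directly from
        # the scene specification, so no human labeling error is possible.
        dataset.append({
            "color": color,
            "shape": shape,
            "size": size,
            "position": pos,
            "question": f"Where is the {size} {color} {shape}?",
            "answer": pos,
        })
    return dataset

dataset = build_balanced_dataset()
print(len(dataset))                                 # 4 * 3 * 2 * 4 = 96
print(Counter(s["position"] for s in dataset))      # each position: 24
```

Because the dataset is generated by exhaustive enumeration rather than collection, every attribute value is equally represented, which is what enables the uniform per-position performance reported in the experiments.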