🤖 AI Summary
Fine-tuning large language models (LLMs) for medical reasoning is hindered by low-quality, redundant training data, leading to high computational costs and suboptimal performance.
Method: We propose a data selection framework for efficient fine-tuning, centered on the Difficulty–Influence Quadrant (DIQ) strategy. DIQ jointly leverages gradient-based influence analysis and multi-dimensional sample difficulty estimation, augmented by a dual evaluation mechanism that integrates human physician judgments with large-model assessments. The strategy prioritizes samples that are simultaneously high-difficulty, high-influence, and high-quality.
Contribution/Results: On mainstream medical reasoning benchmarks, our method matches full fine-tuning performance using only 1% of curated data and consistently surpasses baseline models with just 10%. It significantly reduces computational overhead while improving clinical reasoning accuracy. The approach establishes an interpretable, reproducible paradigm for few-shot optimization of medical LLMs.
📝 Abstract
Supervised Fine-Tuning (SFT) plays a pivotal role in adapting Large Language Models (LLMs) to specialized domains such as medical reasoning. However, existing SFT practices often rely on unfiltered datasets that contain redundant and low-quality samples, leading to substantial computational costs and suboptimal performance. Although existing methods attempt to alleviate this problem by selecting data based on sample difficulty, defined by knowledge and reasoning complexity, they overlook each sample's optimization utility as reflected in its gradient. Interestingly, we find that gradient-based influence alone favors easy-to-optimize samples that cause large parameter shifts but lack deep reasoning chains, while difficulty alone selects noisy or overly complex cases that fail to guide stable optimization. Based on this observation, we propose a data selection strategy, Difficulty-Influence Quadrant (DIQ), which prioritizes samples in the high-difficulty-high-influence quadrant to balance complex clinical reasoning with substantial gradient influence, enabling efficient medical reasoning with minimal fine-tuning data. Furthermore, human and LLM-as-a-judge evaluations show that DIQ-selected subsets exhibit higher data quality and generate clinical reasoning that is more aligned with expert practice in differential diagnosis, safety checks, and evidence citation, as DIQ emphasizes samples that foster expert-like reasoning patterns. Extensive experiments on medical reasoning benchmarks demonstrate that DIQ enables models fine-tuned on only 1% of selected data to match full-dataset performance, while fine-tuning on 10% consistently outperforms the baseline, highlighting the superiority of principled data selection over brute-force scaling. The code and data are available at https://github.com/mihara-bot/DIQ.
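The quadrant idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the scores here are made up, whereas DIQ derives influence from gradients and difficulty from multi-dimensional estimation, and the median thresholds and combined ranking score are assumptions for the sketch.

```python
# Minimal sketch of Difficulty-Influence Quadrant (DIQ) style selection.
# Assumes per-sample difficulty and influence scores have already been computed
# (the paper uses gradient-based influence and multi-dimensional difficulty;
# the thresholds and ranking below are illustrative choices, not the method).
from statistics import median

def diq_select(samples, difficulty, influence, budget=None):
    """Keep samples in the high-difficulty, high-influence quadrant.

    difficulty, influence: dicts mapping sample id -> precomputed score.
    budget: optional cap on subset size (e.g., 1% of the corpus).
    """
    # Split each axis at its median to form the four quadrants.
    d_thr = median(difficulty.values())
    i_thr = median(influence.values())
    selected = [s for s in samples
                if difficulty[s] >= d_thr and influence[s] >= i_thr]
    # Optionally trim to a fixed budget, ranking by a combined score.
    if budget is not None:
        selected.sort(key=lambda s: difficulty[s] + influence[s], reverse=True)
        selected = selected[:budget]
    return selected

# Toy usage with fabricated scores.
samples = ["a", "b", "c", "d"]
difficulty = {"a": 0.9, "b": 0.2, "c": 0.8, "d": 0.1}
influence  = {"a": 0.7, "b": 0.9, "c": 0.7, "d": 0.2}
print(diq_select(samples, difficulty, influence))  # → ['a', 'c']
```

Here `b` is dropped despite its high influence (it is too easy) and `d` is dropped on both axes, mirroring the paper's observation that neither influence nor difficulty alone selects the right samples.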