🤖 AI Summary
Addressing the dual challenges of overfitting during LLM fine-tuning caused by data sparsity in biomedicine and the impracticality of reinforcement learning under ill-defined reward signals, this paper proposes Balanced Fine-Tuning (BFT), a two-level weighted supervised fine-tuning framework that requires no external reward. It dynamically adjusts loss weights at both the token level and the sample level to enhance hard-example learning and mitigate overfitting, combining prediction-probability-scaled token losses with a minimum-confidence constraint to improve training stability and generalization. Experiments demonstrate that BFT outperforms standard SFT and surpasses GeneAgent on biological process reasoning, capturing implicit knowledge missed by conventional fine-tuning. The resulting text embeddings further enable complex downstream applications, including gene-gene interaction inference and single-cell perturbation response prediction, validating the method's robustness and utility in real-world biomedical AI scenarios.
📝 Abstract
Effective post-training is essential to align Large Language Models (LLMs) with specialized biomedical knowledge and accelerate life science research. However, current approaches face significant limitations. First, biomedical reasoning involves intricate mechanisms that are often represented only by sparse textual data; standard Supervised Fine-Tuning (SFT) tends to overfit to surface-level instruction patterns without effectively internalizing this fragmented scientific knowledge. Second, Reinforcement Learning (RL) is impractical for this domain, as defining meaningful rewards often necessitates prohibitive experimental validation (e.g., wet-lab verification of drug responses), rendering real-time feedback unfeasible. We propose Balanced Fine-Tuning (BFT), an efficient post-training method designed to learn complex reasoning from sparse data without external reward signals. BFT operates through a two-level weighting mechanism: (1) at the token level, it scales the loss by prediction probabilities to stabilize gradients and prevent overfitting; (2) at the sample level, it uses "minimum group confidence" to adaptively enhance the learning of hard samples. Experiments demonstrate that BFT significantly outperforms SFT. In medical tasks, it enables LLMs to acquire knowledge that SFT misses. In biological tasks, BFT-based LLMs surpass GeneAgent (an accurate agent for biological analysis) in biological process reasoning. Moreover, the text embeddings generated by BFT can be applied directly to downstream tasks such as gene interaction and single-cell perturbation response prediction. These results indicate that BFT facilitates broad applications of LLMs in biomedical research.
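To make the two-level weighting mechanism concrete, the sketch below shows one plausible PyTorch implementation. The exact weighting functions are assumptions for illustration (the abstract does not give formulas): token losses are scaled by the model's predicted probability of the target token, and each sample is re-weighted by the inverse of its least-confident token, floored at a minimum confidence `min_conf`. The function name `bft_loss` and the normalization choices are hypothetical.

```python
import torch
import torch.nn.functional as F

def bft_loss(logits: torch.Tensor, targets: torch.Tensor, min_conf: float = 0.1) -> torch.Tensor:
    """Sketch of a two-level weighted SFT loss in the spirit of BFT.

    logits:  (batch, seq_len, vocab) -- model outputs
    targets: (batch, seq_len)        -- gold token ids
    The concrete weighting is an assumption, not the paper's formula.
    """
    # Per-token cross-entropy; cross_entropy expects (batch, vocab, seq_len).
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")  # (B, T)
    # Model's probability assigned to each target token (detached: weights
    # should modulate, not receive, gradients).
    p = torch.exp(-ce).detach()

    # Token level: scale loss by prediction probability, damping gradients
    # from tokens the model gets badly wrong to stabilize training.
    token_w = p

    # Sample level: "minimum group confidence" -- the least-confident token
    # per sample, floored at min_conf; harder samples get larger weights.
    min_p = p.min(dim=1).values.clamp(min=min_conf)   # (B,)
    sample_w = 1.0 / min_p
    sample_w = sample_w / sample_w.mean()             # keep batch-scale stable

    per_sample = (token_w * ce).mean(dim=1) * sample_w
    return per_sample.mean()
```

Under this reading, the `min_conf` floor caps how strongly any single hard sample can dominate a batch, which is one way to realize the "minimum confidence constraint" described above.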