🤖 AI Summary
This work addresses the limitations of existing static curriculum learning strategies in adapting to the dynamic training demands of large language model fine-tuning. To overcome this, the authors propose the EDCO framework, which introduces a dynamic curriculum orchestration mechanism based on inference entropy to adaptively reorder training samples and enhance learning efficiency. EDCO incorporates an efficient entropy estimator that approximates full-sequence entropy using prefix tokens, achieving high estimation accuracy while reducing computational overhead by 83.5%. The framework further unifies supervised and reinforcement learning within a single trainer architecture. Experimental results on Qwen3-4B and Llama3.2-3B demonstrate that EDCO consistently outperforms conventional curriculum strategies, delivering significant improvements in both performance and efficiency across diverse domains including communications, medicine, and law.
📝 Abstract
Domain-specific large language models (LLMs), typically developed by fine-tuning a pre-trained general-purpose LLM on specialized datasets, represent a significant advancement in applied AI. A common strategy in LLM fine-tuning is curriculum learning, which pre-orders training samples based on metrics like difficulty to improve learning efficiency compared to a random sampling strategy. However, most existing methods for LLM fine-tuning rely on a static curriculum, designed prior to training, which lacks adaptability to the model's evolving needs during fine-tuning. To address this, we propose EDCO, a novel framework based on two key concepts: inference entropy and dynamic curriculum orchestration. Inspired by recent findings that maintaining high answer entropy benefits long-term reasoning gains, EDCO prioritizes samples with high inference entropy in a continuously adapted curriculum. EDCO integrates three core components: an efficient entropy estimator that uses prefix tokens to approximate full-sequence entropy, an entropy-based curriculum generator that selects data points with the highest inference entropy, and an LLM trainer that optimizes the model on the selected curriculum. In comprehensive experiments across the communication, medicine, and law domains, EDCO outperforms traditional curriculum strategies for fine-tuning Qwen3-4B and Llama3.2-3B models under both supervised and reinforcement learning settings. Furthermore, the proposed efficient entropy estimation reduces computational time by 83.5% while maintaining high accuracy.
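The two core ideas — approximating full-sequence inference entropy from a short prefix, and selecting the highest-entropy samples for the next training batch — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the fixed `prefix_len`, and the simple mean-over-positions aggregation are all assumptions; EDCO's actual estimator and curriculum generator may differ.

```python
import numpy as np

def token_entropy(logits):
    # Shannon entropy (nats) of the softmax distribution at each position.
    # logits: array of shape (seq_len, vocab_size).
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def prefix_entropy_estimate(logits, prefix_len=16):
    # Approximate full-sequence inference entropy by averaging per-token
    # entropies over only the first `prefix_len` positions, avoiding a
    # full forward decode over the whole sequence.
    return float(token_entropy(logits[:prefix_len]).mean())

def entropy_curriculum(sample_logits, batch_size, prefix_len=16):
    # Entropy-based curriculum step (hypothetical sketch): rank candidate
    # samples by estimated inference entropy, highest first, and return
    # the indices of the top `batch_size` samples for the next update.
    scores = [prefix_entropy_estimate(l, prefix_len) for l in sample_logits]
    order = np.argsort(scores)[::-1]
    return order[:batch_size].tolist()
```

Because the curriculum is re-ranked with fresh model logits as training proceeds, the ordering adapts to the model's evolving uncertainty rather than being fixed before training begins.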