🤖 AI Summary
To address the low data-selection efficiency and high computational cost of reinforcement learning (RL) fine-tuning for large language models (LLMs), this paper proposes a single-forward-pass uncertainty estimation framework grounded in Vygotsky's Zone of Proximal Development (ZPD) theory. It introduces the first ZPD-inspired approach to RL data filtering, adaptively defining ZPD boundaries via learnable uncertainty modeling. By replacing multi-sample evaluation with a single forward pass, the method achieves up to a 185× speedup in data evaluation. Combined with policy-optimization–guided data reweighting and lightweight confidence calibration, it significantly improves selection accuracy. Experiments show that the method matches full-data performance using only 10% of training samples, delivers up to 16× end-to-end training acceleration, and markedly improves training stability and cross-task generalization.
📝 Abstract
Scaling RL for LLMs is computationally expensive, largely due to the multi-sampling required for policy optimization and evaluation, making efficient data selection crucial. Inspired by the Zone of Proximal Development (ZPD) theory, we hypothesize that LLMs learn best from data within their zone of potential comprehension. To address the limitations of conventional, computationally intensive multi-sampling methods for data assessment, we introduce UFO-RL. This novel framework uses a computationally efficient single-pass uncertainty estimation to identify informative data instances, achieving up to 185× faster data evaluation. UFO-RL leverages this metric to select data within the estimated ZPD for training. Experiments show that training with just 10% of the data selected by UFO-RL yields performance comparable to or surpassing full-data training, reducing overall training time by up to 16× while enhancing stability and generalization. UFO-RL offers a practical and highly efficient strategy for scaling RL fine-tuning of LLMs by focusing learning on valuable data.
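To make the core idea concrete, here is a minimal, illustrative sketch of single-pass uncertainty scoring and ZPD-band selection. This is not UFO-RL's actual implementation: the mean per-token entropy proxy, the `select_zpd` helper, and the fixed band thresholds are all assumptions for illustration (the paper describes learnable uncertainty modeling and calibration).

```python
import math

def token_entropy(probs):
    """Shannon entropy of one token's next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def sequence_uncertainty(per_token_probs):
    """Mean per-token entropy over a sequence. A single forward pass
    yields all these distributions, so no multi-sampling is needed."""
    return sum(token_entropy(p) for p in per_token_probs) / len(per_token_probs)

def select_zpd(samples, lower, upper):
    """Keep samples whose uncertainty falls inside the assumed ZPD band:
    neither trivially easy (low entropy) nor far beyond the model's
    current ability (high entropy). Thresholds here are hypothetical."""
    return [s for s in samples if lower <= s["uncertainty"] <= upper]

# Toy example: three training instances with precomputed uncertainty scores.
pool = [
    {"id": "easy", "uncertainty": 0.05},
    {"id": "zpd",  "uncertainty": 0.70},
    {"id": "hard", "uncertainty": 2.10},
]
chosen = select_zpd(pool, lower=0.3, upper=1.5)  # keeps only "zpd"
```

In the actual framework, the band boundaries would be set adaptively per model rather than hard-coded, and the selected subset feeds the RL fine-tuning loop in place of the full dataset.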