🤖 AI Summary
This work addresses a central challenge of domain-adaptive fine-tuning in large language models: acquiring domain expertise without catastrophic forgetting of general reasoning capabilities. The authors propose Orthogonal Gradient Selection (OGS), a novel method that brings the geometric principle of gradient orthogonality into the data selection stage. A lightweight Navigator model, trained via reinforcement learning, dynamically selects training samples whose gradients are orthogonal to those of a general-knowledge anchor, without modifying the optimizer or requiring online projection. Experiments across medical, legal, and financial domains demonstrate that OGS significantly improves task performance and training efficiency while preserving, and sometimes even enhancing, the model's general reasoning abilities on benchmarks such as GSM8K.
📝 Abstract
Fine-tuning large language models (LLMs) for specialized domains often forces a trade-off between acquiring domain expertise and retaining general reasoning capabilities; the loss of the latter is known as catastrophic forgetting. Existing remedies face a dichotomy: gradient surgery methods offer geometric safety but incur prohibitive computational costs via online projections, while efficient data selection approaches reduce overhead but remain blind to conflict-inducing gradient directions. In this paper, we propose Orthogonal Gradient Selection (OGS), a data-centric method that harmonizes domain performance, general capability retention, and training efficiency. OGS shifts the geometric insights of gradient projection from the optimizer to the data selection stage by treating data selection as a constrained decision-making process. By leveraging a lightweight Navigator model and reinforcement learning techniques, OGS dynamically identifies training samples whose gradients are orthogonal to a general-knowledge anchor. This approach yields naturally safe updates for target models without modifying the optimizer or incurring runtime projection costs. Experiments across medical, legal, and financial domains demonstrate that OGS significantly improves domain performance and training efficiency while maintaining or even enhancing performance on general tasks such as GSM8K.
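The core selection criterion can be sketched in a few lines. The following is an illustrative NumPy toy, not the paper's implementation: gradients are random stand-in vectors rather than real per-sample LLM gradients, and the `orthogonality_score` function and top-k cutoff are assumptions standing in for the learned Navigator policy and its reinforcement-learning training.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy gradient dimensionality

# Stand-in for the general-knowledge anchor gradient (e.g., averaged over a
# batch of general-domain data); here it is just a random unit vector.
anchor = rng.normal(size=dim)
anchor /= np.linalg.norm(anchor)

# Stand-in per-sample gradients for 100 candidate domain-specific examples.
candidates = rng.normal(size=(100, dim))

def orthogonality_score(g: np.ndarray, anchor: np.ndarray) -> float:
    """Absolute cosine similarity to the anchor; 0 means perfectly orthogonal."""
    return abs(g @ anchor) / (np.linalg.norm(g) * np.linalg.norm(anchor))

scores = np.array([orthogonality_score(g, anchor) for g in candidates])

# Keep the k samples closest to orthogonal (smallest |cos|): their updates
# interfere least with the general-knowledge direction, so training on them
# is "naturally safe" without projecting gradients inside the optimizer.
k = 10
selected = np.argsort(scores)[:k]

print(f"max |cos| among selected: {scores[selected].max():.3f}")
```

In OGS this filtering is amortized: the Navigator learns to predict which samples satisfy the orthogonality constraint, so no per-step gradient projection is needed during fine-tuning of the target model.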