🤖 AI Summary
This work investigates whether gradient-free Evolution Strategies (ES) and gradient-based Group Relative Policy Optimization (GRPO) converge to geometrically similar solutions during the post-training of large language models. Although the two methods achieve comparable task performance, their parameter update directions are nearly orthogonal, with ES inducing substantially larger parameter changes and greater out-of-distribution behavioral drift. Through comprehensive analyses, including optimization trajectory inspection, KL divergence measurements, linear mode connectivity tests, and theoretical modeling, we show for the first time that, although no loss barrier separates their solutions, the two methods explore fundamentally distinct regions of the solution space. We further propose a unified theoretical framework that explains the geometric divergence between these optimization paradigms.
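The orthogonality claim above can be probed with a simple diagnostic: flatten both fine-tuned models' parameters, subtract the shared initialization, and measure the cosine similarity of the two update vectors. The sketch below assumes flat NumPy parameter vectors; the function name is illustrative, not from the paper's released code.

```python
import numpy as np

def update_direction_cosine(theta_init, theta_es, theta_grpo):
    """Cosine similarity between two fine-tuning update directions.

    A value near 0 means the updates are nearly orthogonal, as
    reported here for ES vs. GRPO; a value near 1 means the two
    methods moved the model in essentially the same direction.
    """
    d_es = theta_es - theta_init       # ES update vector
    d_grpo = theta_grpo - theta_init   # GRPO update vector
    return float(d_es @ d_grpo /
                 (np.linalg.norm(d_es) * np.linalg.norm(d_grpo)))
```

In practice the same statistic can be computed per layer rather than over the full flattened vector, which localizes where the two optimizers diverge.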
📝 Abstract
Evolution Strategies (ES) have emerged as a scalable gradient-free alternative to reinforcement-learning-based LLM fine-tuning, but it remains unclear whether comparable task performance implies comparable solutions in parameter space. We compare ES and Group Relative Policy Optimization (GRPO) across four tasks in both single-task and sequential continual-learning settings. ES matches or exceeds GRPO in single-task accuracy and remains competitive in the sequential setting when its iteration budget is controlled. Despite this similarity in task performance, the two methods produce markedly different model updates: ES makes much larger changes and induces broader off-task KL drift, whereas GRPO makes smaller, more localized updates. Strikingly, the ES and GRPO solutions are linearly connected with no loss barrier, even though their update directions are nearly orthogonal. We develop an analytical theory of ES that explains all of these phenomena within a unified framework, showing how ES can accumulate large off-task movement along weakly informative directions while still making enough progress on the task to match gradient-based RL in downstream accuracy. These results show that gradient-free and gradient-based fine-tuning can reach similarly accurate yet geometrically distinct solutions, with important consequences for forgetting and knowledge preservation. The source code is publicly available: https://github.com/Bhoy1/ESvsGRPO.
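The linear mode connectivity test mentioned in the abstract amounts to evaluating the loss along the straight line between the two solutions and checking that no interior point rises above the linear baseline between the endpoints. A minimal sketch, assuming flat parameter vectors and an arbitrary scalar `loss_fn` (both illustrative placeholders, not the paper's evaluation harness):

```python
import numpy as np

def interpolation_losses(theta_a, theta_b, loss_fn, num_points=11):
    """Loss along the segment (1 - a) * theta_a + a * theta_b, a in [0, 1]."""
    alphas = np.linspace(0.0, 1.0, num_points)
    return [loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas]

def loss_barrier(losses):
    """Barrier height: largest excess of the path loss over the
    linear interpolation of the endpoint losses. A value near 0
    indicates linear mode connectivity (no loss barrier)."""
    alphas = np.linspace(0.0, 1.0, len(losses))
    baseline = (1 - alphas) * losses[0] + alphas * losses[-1]
    return float(max(l - b for l, b in zip(losses, baseline)))
```

For a convex toy loss such as `lambda t: float(t @ t)`, any two solutions on the same level set connect with zero barrier; the surprising empirical finding here is that the non-convex LLM loss behaves the same way between the ES and GRPO solutions.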