🤖 AI Summary
To address the high computational cost, memory consumption, training instability, and hyperparameter sensitivity of Reinforcement Learning from Human Feedback (RLHF) for aligning large language models (LLMs), this paper proposes ESSA, an efficient, gradient-free alignment framework that combines Evolution Strategies (ES) with Low-Rank Adaptation (LoRA). ESSA eliminates backpropagation in favor of massively parallel population-based optimization, enabling memory-efficient and stable post-training alignment of LLMs. LoRA drastically reduces the dimensionality of the parameter search space, making evolutionary optimization tractable at LLM scale for the first time. On mathematical reasoning benchmarks, ESSA converges faster and uses data more efficiently than gradient-based methods such as GRPO, demonstrating its effectiveness, robustness, and scalability.
📝 Abstract
Large Language Models (LLMs) increasingly rely on alignment techniques to ensure that their outputs match human preferences. Although reinforcement learning from human feedback (RLHF) is the dominant approach, it incurs high computational costs, large memory requirements, and training instability, particularly when scaling to larger models. This paper introduces ESSA (Evolutionary Strategies for Scalable Alignment), a new framework that uses Evolution Strategies (ES) to align LLMs efficiently without gradient computation. ES is well suited to LLM alignment thanks to its favorable properties: high parallelizability, memory efficiency, robustness to sparse rewards, and fewer data samples required for convergence, especially when starting from a strong pre-trained policy. Moreover, ES eliminates the need for extensive hyperparameter tuning, making the alignment process simpler and more stable. Although ES excels at low-dimensional optimization, applying it to high-dimensional LLM parameter spaces is challenging. To address this, we propose a parameter-efficient architectural modification that reduces the dimensionality of the search space through low-rank adaptation. We evaluate our approach on mathematical reasoning tasks with verifiable, accuracy-based metrics, demonstrating that ESSA converges faster and is more data-efficient than gradient-based methods like Group Relative Policy Optimization (GRPO). Our findings establish ES as a promising and scalable alternative to gradient-based alignment, paving the way for efficient post-training of large language models.
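The population-based, gradient-free update at the heart of ES can be sketched in a few lines. The snippet below is an illustrative toy, not the ESSA implementation: a rank-normalized ES step optimizes a small parameter vector standing in for low-rank (LoRA) weights, with a quadratic toy reward in place of a verifiable task reward. All function names and hyperparameters here are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(theta):
    # Stand-in for a verifiable task reward (e.g., exact-answer accuracy);
    # a toy quadratic with its maximum at theta = 1 keeps the sketch
    # self-contained.
    return -np.sum((theta - 1.0) ** 2)

def es_step(theta, pop_size=32, sigma=0.1, lr=0.02):
    # Sample a population of Gaussian perturbations, score each candidate
    # (these evaluations are embarrassingly parallel), and move along the
    # reward-weighted average perturbation -- no backpropagation needed.
    eps = rng.standard_normal((pop_size, theta.size))
    rewards = np.array([reward(theta + sigma * e) for e in eps])
    # Rank-normalize rewards so the update is robust to reward scale.
    ranks = rewards.argsort().argsort().astype(float)
    weights = (ranks - ranks.mean()) / (ranks.std() + 1e-8)
    grad_estimate = weights @ eps / (pop_size * sigma)
    return theta + lr * grad_estimate

# A 16-dimensional vector stands in for the low-rank search space; the
# paper's point is that LoRA keeps this dimensionality small enough for ES.
theta = np.zeros(16)
for _ in range(300):
    theta = es_step(theta)
```

Because each population member only needs a forward evaluation of the reward, the inner loop parallelizes across workers, which is the property the abstract highlights for memory-efficient alignment.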