🤖 AI Summary
This work addresses the lack of interactive self-correction capability in large language models (LLMs). We propose ARIES, an iterative preference optimization framework that establishes a training–inference co-adaptive, closed-loop pipeline for self-correction data: during inference, it generates real-time self-corrected responses; it employs rule-guided selection, replacing conventional reward modeling, for lightweight and efficient data distillation; and it dynamically constructs preference pairs to iteratively refine DPO training. This enables sustained error correction and response refinement. Experiments show ARIES achieves a 62.3% LC win rate and a 63.3% raw win rate on AlpacaEval 2, outperforming Iterative DPO by 27.8% and 35.5%, respectively; attains a 50.3% win rate on Arena-Hard; and significantly improves mathematical reasoning performance on GSM8K and MATH.
📝 Abstract
A truly intelligent Large Language Model (LLM) should be capable of correcting errors in its responses through external interactions. However, even the most advanced models often struggle to improve their own outputs. In this paper, we explore how to cultivate self-refinement capability in LLMs through iterative preference training, and how this ability can be leveraged to improve model performance during inference. To this end, we introduce a novel post-training and inference framework called ARIES: Adaptive Refinement and Iterative Enhancement Structure. The method alternates between preference training and self-refinement-based data collection. During training, ARIES strengthens the model's direct question-answering capability while simultaneously unlocking its self-refinement potential. During inference, ARIES harnesses this self-refinement capability to generate a series of progressively refined responses, which are then filtered, either by Reward Model Scoring or by a simple yet effective Rule-Based Selection mechanism tailored to our approach, to construct the dataset for the next round of preference training. Experimental results demonstrate the remarkable performance of ARIES. Applied to the Llama-3.1-8B model under the self-refinement setting, ARIES surpasses powerful models such as GPT-4o, achieving a 62.3% length-controlled (LC) win rate and a 63.3% raw win rate on AlpacaEval 2, outperforming Iterative DPO by 27.8% and 35.5%, respectively, and a 50.3% win rate on Arena-Hard, surpassing Iterative DPO by 26.6%. Furthermore, ARIES consistently enhances performance on mathematical reasoning tasks such as GSM8K and MATH.
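The data-collection loop described above can be illustrated with a minimal sketch. Everything here is an assumption for exposition: the function names (`aries_round`, `rule_based_select`), the choice of "last refinement = chosen, first draft = rejected" as the selection rule, and the callable interfaces for the model and refiner are hypothetical placeholders, not the paper's actual API or rule.

```python
def rule_based_select(responses):
    """Toy stand-in for the paper's Rule-Based Selection: pick the final
    refinement as the 'chosen' response and the initial draft as 'rejected'
    (assumes later refinements tend to improve quality)."""
    return responses[-1], responses[0]


def aries_round(prompts, model, refine, n_refine=2):
    """One self-refinement round: for each prompt, draft a response,
    refine it n_refine times, then build a (prompt, chosen, rejected)
    preference pair for the next round of DPO training."""
    pairs = []
    for prompt in prompts:
        chain = [model(prompt)]          # initial draft
        for _ in range(n_refine):
            chain.append(refine(prompt, chain[-1]))  # progressive refinement
        chosen, rejected = rule_based_select(chain)
        pairs.append((prompt, chosen, rejected))
    return pairs  # preference pairs fed to the next DPO iteration
```

In the full framework these pairs would drive a DPO update, after which the improved model generates the next round's refinement chains, closing the loop.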