ARIES: Stimulating Self-Refinement of Large Language Models by Iterative Preference Optimization

📅 2025-02-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of interactive self-correction capability in large language models (LLMs). We propose ARIES, an iterative preference optimization framework that establishes a "training–inference co-adaptive" closed loop of self-correction data: during inference, it generates self-corrected responses in real time; it employs rule-guided selection, replacing conventional reward modeling, for lightweight and efficient data distillation; and it dynamically constructs preference pairs for iterative DPO refinement. This enables sustained error correction and response improvement. Experiments show ARIES achieves a 62.3% LC win rate and a 63.3% raw win rate on AlpacaEval 2, outperforming Iterative DPO by 27.8% and 35.5%, respectively; attains a 50.3% win rate on Arena-Hard; and significantly improves mathematical reasoning performance on GSM8K and MATH.
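The preference pairs the framework constructs are used to train the model with DPO. As a minimal illustration of that training signal, the function below computes a generic per-pair DPO loss from summed log-probabilities under the policy and a frozen reference model; it is a standard textbook formulation, not the paper's exact implementation:

```python
import math

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """DPO loss for one preference pair.

    logp_*: summed token log-probabilities of the chosen (w) and
    rejected (l) responses under the policy and the reference model.
    beta: strength of the implicit KL constraint toward the reference.
    """
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    # -log sigmoid(margin): minimized by widening the policy's preference
    # for the chosen response relative to the reference model.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference, the margin is zero and the loss is log 2; as the policy learns to favor the chosen response, the loss decreases.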

📝 Abstract
A truly intelligent Large Language Model (LLM) should be capable of correcting errors in its responses through external interactions. However, even the most advanced models often face challenges in improving their outputs. In this paper, we explore how to cultivate LLMs with the self-refinement capability through iterative preference training, and how this ability can be leveraged to improve model performance during inference. To this end, we introduce a novel post-training and inference framework, called ARIES: Adaptive Refinement and Iterative Enhancement Structure. This method iteratively performs preference training and self-refinement-based data collection. During training, ARIES strengthens the model's direct question-answering capability while simultaneously unlocking its self-refinement potential. During inference, ARIES harnesses this self-refinement capability to generate a series of progressively refined responses, which are then filtered using either the Reward Model Scoring or a simple yet effective Rule-Based Selection mechanism, specifically tailored to our approach, to construct a dataset for the next round of preference training. Experimental results demonstrate the remarkable performance of ARIES. When applied to the Llama-3.1-8B model and under the self-refinement setting, ARIES surpasses powerful models such as GPT-4o, achieving a 62.3% length-controlled (LC) win rate and a 63.3% raw win rate on AlpacaEval 2, outperforming Iterative DPO by 27.8% and 35.5% respectively, as well as a 50.3% win rate on Arena-Hard, surpassing Iterative DPO by 26.6%. Furthermore, ARIES consistently enhances performance on mathematical reasoning tasks like GSM8K and MATH.
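The data-collection loop described in the abstract (draft a response, refine it repeatedly, select chosen/rejected responses, feed the pairs into the next round of preference training) can be sketched as follows. The helpers `generate`, `refine`, and `select` are caller-supplied stand-ins for model inference and the paper's selection mechanisms, not the authors' actual interfaces:

```python
def collect_preference_pairs(prompts, generate, refine, select, num_refinements=3):
    """Sketch of ARIES-style self-refinement data collection.

    generate(prompt)          -> initial response
    refine(prompt, response)  -> refined response
    select(prompt, responses) -> (chosen, rejected), e.g. via Rule-Based
                                 Selection or Reward Model Scoring
    """
    pairs = []
    for prompt in prompts:
        # Draft an initial answer, then refine it progressively.
        responses = [generate(prompt)]
        for _ in range(num_refinements):
            responses.append(refine(prompt, responses[-1]))
        # Filter the chain into one preference pair for DPO training.
        chosen, rejected = select(prompt, responses)
        pairs.append((prompt, chosen, rejected))
    return pairs
```

The returned pairs would then drive the next round of preference training, closing the training–inference loop.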
Problem

Research questions and friction points this paper is trying to address.

Enhance LLM self-refinement via iterative training
Improve model performance during inference
Generate refined responses using ARIES framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative preference training framework
Self-refinement-based data collection
Reward Model Scoring mechanism
Yongcheng Zeng
University of Chinese Academy of Sciences
LLM, Reinforcement Learning
Xinyu Cui
Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
Xuanfa Jin
Ph.D. student, Institute of Automation, Chinese Academy of Sciences
Reinforcement Learning, Large Language Models, Multi-Agent Reinforcement Learning
Guoqing Liu
Microsoft Research AI for Science
Artificial Intelligence, Reinforcement Learning, Large Language Models, AI for Science
Zexu Sun
Renmin University of China
Causal inference, Reinforcement learning, Large language model
Quan He
Huawei Noah’s Ark Lab, China
Dong Li
Huawei Noah’s Ark Lab, China
Ning Yang
Institute of Automation, Chinese Academy of Sciences
Jianye Hao
Huawei Noah's Ark Lab/Tianjin University
Multiagent Systems, Embodied AI
Haifeng Zhang
Institute of Automation, Chinese Academy of Sciences
Jun Wang
University College London