🤖 AI Summary
Existing RLHF approaches rely on costly human annotations, and RLAIF methods still require external reward models (e.g., GPT-4) or manually constructed preference data. To address these limitations, we propose Self-Alignment Optimization (SAO), the first fully self-generative alignment framework: the model autonomously generates diverse instructions, corresponding responses, and preference pairs, with no human annotation, external reward models, or third-party APIs. SAO uses persona role-play to diversify its instructions and an internal self-evaluation mechanism to build preference data for reinforcement learning optimization. On AlpacaEval 2.0, SAO significantly improves conversational quality while maintaining strong performance on downstream tasks, including question answering and mathematical reasoning, demonstrating its effectiveness, generalizability, and capacity for self-sustained improvement.
📝 Abstract
Traditional reinforcement learning from human feedback (RLHF) for large language models (LLMs) relies on expensive human-annotated datasets, while Reinforcement Learning from AI Feedback (RLAIF) also incurs significant costs: it requires collecting diverse prompts and corresponding responses, and often depends on external reward models or proprietary models such as GPT-4 to annotate preference pairs. In this work, we introduce Self-Alignment Optimization (SAO), a fully self-synthetic framework for LLM alignment in which all training data, including prompts (i.e., user queries), responses, and preferences, are generated by the model itself. Specifically, SAO first instructs the LLM to engage in persona role-play and generate diverse prompts and responses, which the model then self-evaluates to construct preference pairs for optimization. Extensive experiments demonstrate that SAO effectively enhances the model's chat capabilities on standard benchmarks such as AlpacaEval 2.0, while maintaining strong performance on downstream objective tasks (e.g., question answering, math reasoning). Our work provides a practical solution for self-improvement in aligning LLMs, and the code for reproducing our results is available at: https://github.com/SJY8460/SAO.
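The three-stage pipeline described above (persona role-play to elicit diverse prompts, response sampling, and self-evaluation into preference pairs) can be sketched as a minimal data-generation loop. This is an illustrative sketch, not the authors' implementation: `generate` and `self_score` are hypothetical stand-ins for calls to the same underlying LLM, replaced here with deterministic stubs so the control flow is runnable.

```python
import random

# Hypothetical stand-in for the model's generation call; in SAO,
# all three stages below would invoke the same LLM being aligned.
def generate(prompt, seed=0):
    random.seed(hash(prompt) % (2**32) + seed)
    return f"response[{random.randint(0, 999)}] to: {prompt[:40]}"

# Hypothetical stand-in for the internal self-evaluation mechanism:
# the model judges its own responses; here a deterministic fake score.
def self_score(query, response):
    return (hash(query + response) % 100) / 100

PERSONAS = ["a patient math tutor", "a skeptical journalist"]

def sao_round(n_prompts=2, k_responses=3):
    """One self-synthetic round: persona-driven prompts, sampled
    responses, and self-evaluated (chosen, rejected) preference pairs."""
    pairs = []
    for i in range(n_prompts):
        persona = PERSONAS[i % len(PERSONAS)]
        # Stage 1: persona role-play to elicit a diverse user query.
        query = generate(f"You are {persona}. Ask a question.", seed=i)
        # Stage 2: sample several candidate responses to that query.
        cands = [generate(query, seed=j) for j in range(k_responses)]
        # Stage 3: self-evaluate; best vs. worst forms a preference pair.
        ranked = sorted(cands, key=lambda r: self_score(query, r), reverse=True)
        pairs.append({"prompt": query, "chosen": ranked[0], "rejected": ranked[-1]})
    return pairs  # fed into DPO-style preference optimization
```

In a real run, the `pairs` list would be used as a preference dataset (prompt / chosen / rejected triples) for the reinforcement-learning-style optimization step, with no external annotator in the loop.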