Reverse Preference Optimization for Complex Instruction Following

📅 2025-05-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle with following complex, multi-constrained instructions, and conventional preference learning suffers from coarse-grained constraint satisfaction evaluation, leading to noisy preference pairs. To address this, we propose Reverse Preference Optimization (RPO), a novel framework that dynamically constructs "perfect" positive responses satisfying all constraints via reverse constraint reasoning, enabling fine-grained alignment between model outputs and individual constraints. RPO integrates constraint-aware reverse generation, dynamic response-constraint matching, and multi-turn instruction fine-tuning within the DPO paradigm. This approach significantly widens the margin between high- and low-quality responses, reducing sampling and filtering overhead. Experiments on Sysbench and Multi-IF benchmarks show that RPO improves DPO by +4.6 and +2.5 points on Llama-3.1-8B, respectively; moreover, the Llama-3.1-70B variant surpasses GPT-4o in constrained instruction following.
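The core mechanism described above can be illustrated with a minimal sketch: instead of discarding a sampled response that violates some constraints, RPO reverses the violated constraints in the instruction so that the same response becomes a "perfect" chosen example. All names below (`build_rpo_pair`, `negate`, `check_constraint`) are illustrative assumptions, not the paper's code, and the string-based negation stands in for what the paper presumably does with an LLM rewrite.

```python
def negate(constraint):
    # Toy constraint reversal for demonstration; the actual method
    # would rewrite the constraint into its opposite (e.g. via an LLM).
    return "Do not: " + constraint

def build_rpo_pair(instruction, constraints, response, check_constraint):
    """Reverse the constraints a sampled response violates so that the
    response perfectly satisfies the modified instruction.

    check_constraint(response, constraint) -> bool is a caller-supplied
    verifier (assumed, not specified by the paper summary).
    """
    adjusted = []
    for c in constraints:
        if check_constraint(response, c):
            adjusted.append(c)          # satisfied: keep the constraint
        else:
            adjusted.append(negate(c))  # violated: reverse it
    chosen_instruction = instruction + " " + " ".join(adjusted)
    # The (chosen_instruction, response) pair is now noise-free "chosen"
    # data for DPO; a rejected response would be paired separately.
    return chosen_instruction, response
```

A quick usage example: if a response uses bullet points but is in English rather than French, the "write in French" constraint is reversed, so the response satisfies every constraint in the adjusted instruction:

```python
check = lambda resp, c: c == "use bullet points"  # toy verifier
instr, resp = build_rpo_pair(
    "Summarize the article.",
    ["use bullet points", "write in French"],
    "- point one\n- point two",
    check,
)
# instr now contains "use bullet points" and "Do not: write in French"
```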

๐Ÿ“ Abstract
Instruction following (IF) is a critical capability for large language models (LLMs). However, handling complex instructions with multiple constraints remains challenging. Previous methods typically select preference pairs based on the number of constraints they satisfy, introducing noise where chosen examples may fail to follow some constraints and rejected examples may excel in certain respects over the chosen ones. To address the challenge of aligning with multiple preferences, we propose a simple yet effective method called Reverse Preference Optimization (RPO). It mitigates noise in preference pairs by dynamically reversing the constraints within the instruction to ensure the chosen response is perfect, alleviating the burden of extensive sampling and filtering to collect perfect responses. Besides, reversal also enlarges the gap between chosen and rejected responses, thereby clarifying the optimization direction and making it more robust to noise. We evaluate RPO on two multi-turn IF benchmarks, Sysbench and Multi-IF, demonstrating average improvements over the DPO baseline of 4.6 and 2.5 points (on Llama-3.1 8B), respectively. Moreover, RPO scales effectively across model sizes (8B to 70B parameters), with the 70B RPO model surpassing GPT-4o.
Problem

Research questions and friction points this paper is trying to address.

Handling complex instructions with multiple constraints in LLMs
Reducing noise in preference pairs for better alignment
Improving robustness and performance in multi-turn instruction following
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reverse constraints to ensure perfect responses
Dynamic reversal clarifies optimization direction
Effective scaling across different model sizes
🔎 Similar Papers
No similar papers found.