Anyprefer: An Agentic Framework for Preference Data Synthesis

📅 2025-04-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high annotation cost of preference data and the self-reward bias that arises when the reward model shares weights with the target model, this paper proposes Anyprefer, a framework that casts preference data synthesis as a cooperative two-player Markov Game between the target model and an external judge model. Key innovations include this dual-agent synthesis paradigm, external tool invocation (e.g., retrieval and verification) to ground the judge's rewards, a feedback mechanism that optimizes the prompts of both models, and a standardized pipeline that compiles the synthesized data into Anyprefer-V1, a preference dataset of 58K high-quality pairs. Evaluated on 21 datasets spanning four applications, the method achieves significant alignment improvements, with average gains ranging from 3.66% to 30.05%, mitigating reward bias and reducing reliance on human annotation.

📝 Abstract
High-quality preference data is essential for aligning foundation models with human values through preference learning. However, manual annotation of such data is often time-consuming and costly. Recent methods often adopt a self-rewarding approach, where the target model generates and annotates its own preference data, but this can lead to inaccuracies since the reward model shares weights with the target model, thereby amplifying inherent biases. To address these issues, we propose Anyprefer, a framework designed to synthesize high-quality preference data for aligning the target model. Anyprefer frames the data synthesis process as a cooperative two-player Markov Game, in which the target model and the judge model collaborate. Here, a series of external tools is introduced to assist the judge model in accurately rewarding the target model's responses, mitigating biases in the rewarding process. In addition, a feedback mechanism is introduced to optimize prompts for both models, enhancing collaboration and improving data quality. The synthesized data is compiled into a new preference dataset, Anyprefer-V1, consisting of 58K high-quality preference pairs. Extensive experiments show that Anyprefer significantly improves model alignment performance across four main applications, covering 21 datasets, achieving average improvements of 18.55% on five natural language generation datasets, 3.66% on nine vision-language understanding datasets, 30.05% on three medical image analysis datasets, and 16.00% on four visuo-motor control tasks.
Problem

Research questions and friction points this paper is trying to address.

Automating high-quality preference data synthesis for model alignment
Reducing biases in self-rewarding preference learning methods
Enhancing collaboration between target and judge models via external tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-player Markov Game for data synthesis
External tools mitigate reward biases
Feedback mechanism optimizes model prompts
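The innovations above describe a loop: the target model proposes candidate responses, the judge model scores them with help from external tools, a feedback step rewrites the prompts when quality is low, and the best/worst candidates are compiled into a preference pair. A minimal sketch of that loop, with all function names, the scoring scheme, and the stopping rule as illustrative assumptions (the paper's actual models, tools, and feedback mechanism are not reproduced here):

```python
def target_generate(prompt, n=4):
    # Stand-in for the target model: produce n candidate responses.
    return [f"{prompt}::candidate-{i}" for i in range(n)]

def judge_score(prompt, response, tools):
    # Stand-in for the tool-assisted judge model: aggregate evidence
    # from external tools (e.g., retrieval, verification) into a reward.
    return sum(tool(prompt, response) for tool in tools)

def refine_prompt(prompt, best_score, threshold=2):
    # Feedback step: if even the best candidate scores poorly,
    # rewrite the prompt for the next round (assumed heuristic).
    return prompt if best_score >= threshold else prompt + " (be specific)"

def synthesize_pair(prompt, tools, max_rounds=3):
    """One episode of the two-player game: generate, judge, apply
    prompt feedback, then emit a (chosen, rejected) preference pair."""
    for _ in range(max_rounds):
        candidates = target_generate(prompt)
        # Rank candidates by the judge's tool-grounded reward.
        ranked = sorted(candidates,
                        key=lambda r: judge_score(prompt, r, tools),
                        reverse=True)
        best, worst = ranked[0], ranked[-1]
        new_prompt = refine_prompt(prompt, judge_score(prompt, best, tools))
        if new_prompt == prompt:
            break  # judge is satisfied; stop iterating
        prompt = new_prompt
    return {"prompt": prompt, "chosen": best, "rejected": worst}
```

Collecting the returned dictionaries over many prompts would yield a preference dataset in the chosen/rejected format commonly used for preference learning.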