The Fellowship of the LLMs: Multi-Agent Workflows for Synthetic Preference Optimization Dataset Generation

📅 2024-08-16
🏛️ arXiv.org
📈 Citations: 8
Influential: 0
🤖 AI Summary
This work addresses the high cost and scalability limitations of human-annotated preference optimization (PO) datasets. To this end, we propose an end-to-end automated multi-agent synthetic framework. Methodologically, we decouple response generation from evaluation and introduce three LLM-based evaluation paradigms: “LLM-as-a-Judge”, “LLM-as-a-Jury”, and “LLM Debate”. We further design a generator-reviewer feedback loop architecture to enable collaborative optimization. Technically, the framework integrates GPT-4o, Llama, and Gemma with multi-stage prompt engineering, win-rate-driven multi-agent configuration search, and synthetic data distillation. Experimental results demonstrate that GPT-4o achieves the highest inter-annotator agreement as a judge, and that the Llama–Gemma feedback loop attains win rates of 71.8% and 73.8% over single-agent Llama and Gemma, respectively. Our framework enables scalable, high-quality PO dataset synthesis without human annotation.
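Of the three evaluation paradigms named above, the jury variant aggregates verdicts from several LLM judges. A minimal sketch of that aggregation step, assuming each jury LLM returns a binary preference ("A" or "B") between two candidate responses; the function name and verdict format are illustrative, not the paper's API:

```python
from collections import Counter

def jury_vote(verdicts):
    """Aggregate per-judge preferences ('A' or 'B') by majority vote.

    `verdicts` holds one label per jury LLM; an exact tie returns None
    (how ties are actually resolved is left open here).
    """
    counts = Counter(verdicts)
    (top, n), *rest = counts.most_common()
    if rest and rest[0][1] == n:
        return None  # no majority between the two responses
    return top

# Three jurors, two of whom prefer response A:
# jury_vote(["A", "B", "A"]) -> "A"
```

A single-judge setup (LLM-as-a-Judge) is the degenerate case of a one-element jury; the debate paradigm instead lets the evaluators exchange arguments before a verdict is reached.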

📝 Abstract
This paper presents a novel methodology for generating synthetic Preference Optimization (PO) datasets using multi-agent workflows. We evaluate the effectiveness and potential of these workflows in automating and enhancing the dataset generation process. PO dataset generation requires two modules: (1) response evaluation, and (2) response generation. In the response evaluation module, the responses from Large Language Models (LLMs) are evaluated and ranked, a task typically carried out by human annotators that we automate using LLMs. We assess the response evaluation module in a two-step process. In step 1, we assess LLMs as evaluators using three distinct prompting strategies. In step 2, we apply the winning prompting strategy to compare the performance of LLM-as-a-Judge, LLMs-as-a-Jury, and LLM Debate. Our evaluation shows that GPT-4o-as-a-Judge is more consistent across all datasets. For the response generation module, we use the identified LLM evaluator configuration and compare different configurations of the LLM Feedback Loop. We use the win rate to determine the best multi-agent configuration for generation. Experimenting with various configurations, we find that the LLM Feedback Loop, with Llama as the generator and Gemma as the reviewer, achieves a notable 71.8% and 73.8% win rate over single-agent Llama and Gemma, respectively. After identifying the best configurations for both modules, we generate our PO datasets using the above pipeline.
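The generation module described in the abstract pairs a generator LLM with a reviewer LLM. A minimal sketch of such a feedback loop, with the model calls abstracted as callables; the function signatures and the revision-prompt wording are assumptions, not the paper's actual prompts:

```python
def feedback_loop(prompt, generate, review, rounds=2):
    """Generator-reviewer loop: one LLM drafts a response, a second LLM
    critiques it, and the generator revises using that critique.

    `generate(prompt)` and `review(prompt, draft)` wrap LLM calls
    (e.g. Llama as generator and Gemma as reviewer, as in the paper's
    best configuration). `rounds` controls how many critique/revise
    cycles run after the initial draft.
    """
    draft = generate(prompt)
    for _ in range(rounds):
        critique = review(prompt, draft)
        draft = generate(
            f"{prompt}\n\nDraft:\n{draft}\n\n"
            f"Reviewer feedback:\n{critique}\n\nRevise the draft."
        )
    return draft
```

The single-agent baselines the paper compares against correspond to `rounds=0`, i.e. keeping only the initial draft with no reviewer pass.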
Problem

Research questions and friction points this paper is trying to address.

Automating synthetic Preference Optimization dataset generation using multi-agent workflows
Evaluating LLMs as automated judges for response ranking and feedback
Optimizing multi-agent configurations for improved response generation and review
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent workflows automate PO dataset generation
LLMs replace human annotators in response evaluation
LLM Feedback Loop enhances response generation efficiency