GAPO: Learning Preferential Prompt through Generative Adversarial Policy Optimization

📅 2025-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large language models exhibit weak output control under fine-grained, pre-defined constraints, suffering from high hallucination rates and poor robustness. To address this, we propose the Generative Adversarial Strategy Optimization (GASO) framework: it employs adversarial training to dynamically synthesize progressively harder constraint-aware samples; jointly trains a lightweight encoder-only reward model to capture prompt–response preference relationships; and enables progressive, adaptive optimization of constraint understanding. GASO is the first method to deeply integrate the GAN paradigm with preference-based prompt learning—extending DPO and KTO—thereby circumventing policy collapse and reward hacking inherent in reinforcement learning approaches. Evaluated across multiple benchmarks for fine-grained constrained generation, GASO significantly outperforms PPO, DPO, and KTO, achieving an average accuracy gain of 12.7% and reducing hallucination rates by 34.5%.

Technology Category

Application Category

📝 Abstract
Recent advances in large language models have highlighted the critical need for precise control over model outputs through predefined constraints. While existing methods attempt to achieve this through either direct instruction-response synthesis or preferential response optimization, they often struggle with constraint understanding and adaptation. This limitation becomes particularly evident when handling fine-grained constraints, leading to either hallucination or brittle performance. We introduce Generative Adversarial Policy Optimization (GAPO), a novel framework that combines GAN-based training dynamics with an encoder-only reward model to progressively learn and adapt to increasingly complex constraints. GAPO leverages adversarial training to automatically generate training samples of varying difficulty while utilizing the encoder-only architecture to better capture prompt-response relationships. Extensive experiments demonstrate GAPO's superior performance across multiple benchmarks, particularly in scenarios requiring fine-grained constraint handling, where it significantly outperforms existing methods like PPO, DPO, and KTO. Our results suggest that GAPO's unique approach to preferential prompt learning offers a more robust and effective solution for controlling LLM outputs. Code is avaliable in https://github.com/MikeGu721/GAPO.
Problem

Research questions and friction points this paper is trying to address.

Control model outputs via precise constraints
Improve constraint understanding and adaptation
Handle fine-grained constraints robustly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines GAN training with encoder-only reward
Generates varying difficulty samples automatically
Improves fine-grained constraint handling significantly
🔎 Similar Papers
No similar papers found.
Zhouhong Gu
Zhouhong Gu
Fudan University
Language ModelingAutomated SocietyModel Editing
X
Xingzhou Chen
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
X
Xiaoran Shi
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
T
Tao Wang
Alibaba Group
S
Suhang Zheng
Alibaba Group
T
Tianyu Li
Alibaba Group
Hongwei Feng
Hongwei Feng
Fudan University
knowledge management,AI,big data
Y
Yanghua Xiao
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University