🤖 AI Summary
Current large language models exhibit weak output control under fine-grained, pre-defined constraints, suffering from high hallucination rates and poor robustness. To address this, we propose the Generative Adversarial Strategy Optimization (GASO) framework: it employs adversarial training to dynamically synthesize progressively harder constraint-aware samples; jointly trains a lightweight encoder-only reward model to capture prompt–response preference relationships; and enables progressive, adaptive optimization of constraint understanding. GASO is the first method to deeply integrate the GAN paradigm with preference-based prompt learning—extending DPO and KTO—thereby circumventing policy collapse and reward hacking inherent in reinforcement learning approaches. Evaluated across multiple benchmarks for fine-grained constrained generation, GASO significantly outperforms PPO, DPO, and KTO, achieving an average accuracy gain of 12.7% and reducing hallucination rates by 34.5%.
📝 Abstract
Recent advances in large language models have highlighted the critical need for precise control over model outputs through predefined constraints. While existing methods attempt to achieve this through either direct instruction-response synthesis or preferential response optimization, they often struggle with constraint understanding and adaptation. This limitation becomes particularly evident when handling fine-grained constraints, leading to either hallucination or brittle performance. We introduce Generative Adversarial Policy Optimization (GAPO), a novel framework that combines GAN-based training dynamics with an encoder-only reward model to progressively learn and adapt to increasingly complex constraints. GAPO leverages adversarial training to automatically generate training samples of varying difficulty while utilizing the encoder-only architecture to better capture prompt-response relationships. Extensive experiments demonstrate GAPO's superior performance across multiple benchmarks, particularly in scenarios requiring fine-grained constraint handling, where it significantly outperforms existing methods like PPO, DPO, and KTO. Our results suggest that GAPO's unique approach to preferential prompt learning offers a more robust and effective solution for controlling LLM outputs. Code is avaliable in https://github.com/MikeGu721/GAPO.