🤖 AI Summary
Existing preference optimization methods (e.g., DPO) implicitly assume majority preferences, neglecting minority viewpoints and latent user intentions embedded in prompts—leading to alignment bias and insufficient robustness. To address this, we propose A-IPO, an *intention-driven adaptive preference optimization* framework: it explicitly models user intent within the reward function and augments DPO with an *intent–response similarity regularization term*. We design a dedicated intent inference module and introduce three novel benchmarks—Real-pref, Attack-pref, and GlobalOpinionQA-Ext—to evaluate multi-intent modeling, adversarial robustness, and global opinion compatibility. Experiments demonstrate that A-IPO achieves up to 24.8% higher win rates, 52.2% improved defense success rates against adversarial preferences, and 54.6% greater intent consistency—significantly enhancing alignment accuracy, robustness, and inclusivity in preference learning.
📝 Abstract
Human preferences are diverse and dynamic, shaped by regional, cultural, and social factors. Existing alignment methods like Direct Preference Optimization (DPO) and its variants often default to majority views, overlooking minority opinions and failing to capture latent user intentions in prompts.
To address these limitations, we introduce **A**daptive **I**ntent-driven **P**reference **O**ptimization (**A-IPO**). Specifically, A-IPO introduces an intention module that infers the latent intent behind each user prompt and explicitly incorporates this inferred intent into the reward function, encouraging stronger alignment between the model's preferred responses and the user's underlying intentions. We demonstrate, both theoretically and empirically, that incorporating an intention–response similarity term increases the preference margin (by a positive shift of $\lambda\,\Delta\mathrm{sim}$ in the log-odds), resulting in clearer separation between preferred and dispreferred responses compared to DPO.
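The claimed margin shift can be illustrated with a minimal sketch. The snippet below assumes the common DPO formulation (implicit reward margin $\beta$ times the difference of policy-vs-reference log-ratios) and adds a hypothetical similarity bonus $\lambda(\mathrm{sim}_w - \mathrm{sim}_l)$ to the log-odds before the sigmoid; the function names, the weight `lam`, and the scalar similarity scores are illustrative, not the paper's exact implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO: -log sigma(beta * [(logp_w - ref_w) - (logp_l - ref_l)])."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(sigmoid(margin)), margin

def a_ipo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
               sim_w, sim_l, beta=0.1, lam=0.5):
    """Sketch of A-IPO: the DPO log-odds margin is shifted by
    lam * (sim_w - sim_l), i.e. lambda * Delta_sim from the abstract.
    sim_w / sim_l are intent-response similarity scores (hypothetical scalars)."""
    _, margin = dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta)
    shifted = margin + lam * (sim_w - sim_l)
    return -math.log(sigmoid(shifted)), shifted

# When the preferred response is more similar to the inferred intent
# (sim_w > sim_l), the shift is positive and the loss strictly decreases,
# widening the separation between preferred and dispreferred responses.
loss_dpo, m = dpo_loss(-1.0, -2.0, -1.5, -1.8)
loss_aipo, m_shifted = a_ipo_loss(-1.0, -2.0, -1.5, -1.8, sim_w=0.9, sim_l=0.2)
```

Under this reading, A-IPO reduces to DPO when $\lambda = 0$ or when the two responses are equally intent-aligned ($\Delta\mathrm{sim} = 0$).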
For evaluation, we introduce two new benchmarks, Real-pref and Attack-pref, along with an extended version of an existing dataset, GlobalOpinionQA-Ext, to assess real-world and adversarial preference alignment.
Through explicit modeling of diverse user intents, A-IPO facilitates pluralistic preference optimization while simultaneously enhancing adversarial robustness in preference alignment. Comprehensive empirical evaluation demonstrates that A-IPO consistently surpasses existing baselines, yielding substantial improvements across key metrics: up to +24.8 win rate and +45.6 Response-Intention Consistency on Real-pref; up to +38.6 Response Similarity and +52.2 Defense Success Rate on Attack-pref; and up to +54.6 Intention Consistency Score on GlobalOpinionQA-Ext.