Right Now, Wrong Then: Non-Stationary Direct Preference Optimization under Preference Drift

📅 2024-07-26
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing RLHF methods assume static human preferences, ignoring the non-stationary drift induced by societal change and a shifting environment, which degrades alignment over time. This paper proposes Non-Stationary Direct Preference Optimization (NS-DPO), which models preference dynamics via a time-varying Bradley–Terry model and an exponentially time-weighted DPO loss. The authors derive an upper bound on the estimation error under non-stationarity and analyse convergence in the offline setting. NS-DPO combines time-aware preference modeling, exponential discounting, and rigorous theoretical analysis. Experiments on simulated preference-drift scenarios show that NS-DPO significantly outperforms standard DPO and other baselines, remaining robust under drift without sacrificing performance in stationary settings. The result is an approach for aligning LLMs under time-varying preferences with both provable guarantees and empirical effectiveness.

📝 Abstract
Reinforcement learning from human feedback (RLHF) aligns Large Language Models (LLMs) with human preferences. However, these preferences can often change over time due to external factors (e.g., environmental change and societal influence). Consequently, what was wrong then might be right now. Current preference optimization algorithms do not account for temporal preference drift in their modeling, which can lead to severe misalignment. To address this limitation, we use a Dynamic Bradley–Terry model that models preferences via time-dependent reward functions, and propose Non-Stationary Direct Preference Optimisation (NS-DPO). By introducing a discount parameter in the loss function, NS-DPO applies exponential weighting, which proportionally focuses learning on more time-relevant datapoints. We theoretically analyse the convergence of NS-DPO in the offline setting, providing upper bounds on the estimation error caused by non-stationary preferences. Finally, we demonstrate the effectiveness of NS-DPO for fine-tuning LLMs in scenarios with drifting preferences. By simulating preference drift using renowned reward models and modifying popular LLM datasets accordingly, we show that NS-DPO fine-tuned LLMs remain robust under non-stationarity, significantly outperforming baseline algorithms that ignore temporal preference changes, without sacrificing performance in stationary cases.
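The exponential time-weighting described in the abstract can be sketched in a few lines. This is an illustrative reading of the idea, not the paper's implementation: the function name, the datapoint layout `(t, lr_chosen, lr_rejected)`, and the weight normalization are all assumptions; the log-ratios stand in for log π_θ(y|x) − log π_ref(y|x) of the chosen and rejected responses.

```python
import math

def ns_dpo_loss(datapoints, beta=0.1, gamma=0.9, t_now=0):
    """Exponentially time-weighted DPO loss (illustrative sketch).

    Each datapoint is (t, lr_chosen, lr_rejected), where lr_* is the
    log-ratio log pi_theta(y|x) - log pi_ref(y|x) for the chosen and
    rejected responses. Older points (smaller t) are discounted by
    gamma**(t_now - t), so learning concentrates on recent preferences.
    """
    total, weight_sum = 0.0, 0.0
    for t, lr_chosen, lr_rejected in datapoints:
        w = gamma ** (t_now - t)                    # exponential discount
        margin = beta * (lr_chosen - lr_rejected)
        total += w * math.log1p(math.exp(-margin))  # -log sigmoid(margin)
        weight_sum += w
    return total / weight_sum
```

With `gamma=1` every timestamp gets equal weight and the sketch reduces to an ordinary (stationary) DPO average; as `gamma` shrinks below 1, the loss is increasingly dominated by the most recent preference datapoints.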
Problem

Research questions and friction points this paper is trying to address.

Addresses temporal preference drift in RLHF alignment
Models time-dependent rewards via Dynamic Bradley-Terry
Ensures LLM robustness under non-stationary preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Bradley-Terry model for time-dependent rewards
NS-DPO with exponential weighting in loss
Robust LLM fine-tuning under preference drift