Robust Multi-Objective Preference Alignment with Online DPO

📅 2025-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of adapting large language models (LLMs) at inference time to variable weights over multiple, potentially conflicting human preference objectives. The authors propose Multi-Objective Online Direct Preference Optimization (MO-ODPO), a preference-conditioned single-policy framework: preference weights are encoded into the prompt, so one policy can switch among conflicting objectives at inference time without retraining. MO-ODPO combines an online DPO-based training loop with Pareto-frontier evaluation, avoiding the cost of training a separate policy for each preference combination. On two popular benchmarks, MO-ODPO Pareto-dominates existing baselines and delivers strong inference-time multi-objective steerability, balancing configurability, personalization, and safety while remaining efficient to train.

📝 Abstract
Multi-objective preference alignment of large language models (LLMs) is critical for developing AI systems that are more configurable, personalizable, helpful, and safe. However, optimizing model outputs to satisfy diverse objectives with variable weights at inference time for truly personalized models presents a significant challenge. Existing approaches are either computationally expensive to train or do not sufficiently steer model behaviors. This paper introduces the Multi-Objective Online DPO (MO-ODPO) algorithm, designed to robustly and efficiently align model behaviors with multiple, potentially conflicting human preferences. Our approach incorporates a prompt conditioning mechanism, allowing us to train a single preference-conditional policy that can adapt to new preference combinations at inference time. Experiments on two popular benchmarks show that MO-ODPO Pareto-dominates existing baselines while providing excellent inference-time steerability between diverse objectives.
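
The prompt conditioning mechanism can be illustrated concretely. A minimal sketch, assuming a textual weight-prefix encoding and the objective names `helpfulness` and `safety`; both the encoding format and the objective names are illustrative assumptions, not details from the paper:

```python
import random

OBJECTIVES = ["helpfulness", "safety"]  # illustrative objective names

def sample_weights(n):
    """Sample a random preference vector on the simplex (weights sum to 1)."""
    raw = [random.expovariate(1.0) for _ in range(n)]
    total = sum(raw)
    return [r / total for r in raw]

def condition_prompt(prompt, weights):
    """Prepend the preference weights to the user prompt as a textual prefix.

    The paper's exact encoding is not specified here; this tag-style
    prefix is one plausible instantiation.
    """
    prefix = " ".join(f"<{name}:{w:.2f}>" for name, w in zip(OBJECTIVES, weights))
    return f"{prefix} {prompt}"

weights = sample_weights(len(OBJECTIVES))
conditioned = condition_prompt("Explain photosynthesis.", weights)
```

Because the weights live in the prompt rather than in the model parameters, serving a new preference combination at inference time only requires changing the prefix, not retraining the policy.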
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with multiple, potentially conflicting human preferences efficiently.
Reducing the computational expense of multi-objective alignment, which otherwise requires training many separate policies.
Enabling inference-time adaptation to new preference weight combinations without retraining.
Innovation

Methods, ideas, or system contributions that make the work stand out.

MO-ODPO: an online DPO algorithm for multi-objective preference alignment
Prompt conditioning on preference weights, yielding a single policy that adapts to new preference combinations
Efficient inference-time steerability across conflicting objectives
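
One way an online training step could combine these ideas is sketched below. This is a hedged illustration rather than the paper's implementation: the `scalarize` helper, the toy per-objective reward values, and the pair-selection rule are assumptions; the loss itself is the standard DPO formulation.

```python
import math

def scalarize(rewards, weights):
    """Combine per-objective reward scores with the sampled preference weights."""
    return sum(w * r for w, r in zip(weights, rewards))

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Online step: the policy samples two responses, each scored per objective
# (toy numbers standing in for reward-model outputs, e.g. [helpfulness, safety]).
weights = [0.7, 0.3]       # preference vector sampled for this step
rewards_a = [0.9, 0.2]
rewards_b = [0.4, 0.8]

# The weighted scalarized reward decides which response is "chosen".
if scalarize(rewards_a, weights) >= scalarize(rewards_b, weights):
    chosen, rejected = "a", "b"
else:
    chosen, rejected = "b", "a"

# A DPO update on the preference-conditioned prompt would then use
# log-probabilities under the policy and a frozen reference model:
loss = dpo_loss(-10.0, -12.0, -11.0, -11.0)
```

Since the chosen/rejected labels depend on the sampled weight vector, and that same vector is encoded into the prompt, the single policy learns to honor whatever preference weights appear in its conditioning at inference time.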