Simultaneous Multi-objective Alignment Across Verifiable and Non-verifiable Rewards

📅 2025-10-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the multi-objective alignment challenge for large language models: simultaneously optimizing verifiable rewards (e.g., mathematical correctness), non-verifiable subjective preferences (e.g., human values), and complex interactive settings (e.g., multi-turn AI tutoring). The authors propose MAH-DPO, a unified framework that combines multi-action-head direct preference optimization (DPO), vectorized reward modeling, and process reward model (PRM) training to enable joint optimization across objectives. Crucially, MAH-DPO supports fine-grained, user-controllable trade-offs among objectives at inference time. Experiments show significant improvements on mathematical reasoning, value alignment, and multi-turn dialogue tasks, with reduced inter-objective compromise, greater alignment flexibility, and improved controllability compared to prior methods.

📝 Abstract
Aligning large language models to human preferences is inherently multidimensional, yet most pipelines collapse heterogeneous signals into a single optimizable objective. We seek to answer what it would take to simultaneously align a model across domains spanning verifiable rewards (mathematical accuracy), non-verifiable subjective preferences (human values), and complex interactive scenarios (multi-turn AI tutoring dialogues). Such multi-objective reinforcement learning setups are often plagued by individual objectives being at odds with each other, resulting in inefficient training and little user control during inference. We propose a unified framework that: (i) standardizes process reward model (PRM) training across both verifiable and non-verifiable settings to better supervise models' chain-of-thought reasoning; (ii) performs multi-objective alignment by training the LLM with our **M**ulti-**A**ction-**H**ead **DPO** (MAH-DPO) and a vectorized reward whose dimensions correspond to the various objectives instead of a single scalar; and (iii) demonstrates how such a system provides fine-grained inference-time user control. Experiments across math reasoning, value alignment, and multi-turn dialogue show that our framework improves performance across multiple objectives simultaneously, while minimizing cross-objective trade-offs and enabling flexible inference-time user control. The code can be found at https://github.com/pearls-lab/multiobj-align.
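The abstract's vectorized reward can be illustrated with a small sketch. Everything below (the function name, the per-head weighting scheme) is an assumption for illustration, not the paper's actual implementation: each of K action heads gets its own standard DPO margin, and the per-objective losses are combined with a weight vector rather than collapsed into one scalar reward up front.

```python
import numpy as np

def mah_dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
                 beta=0.1, weights=None):
    """Hypothetical sketch of a multi-action-head DPO loss.

    Each argument is a length-K vector of summed log-probabilities,
    one entry per objective-specific action head (e.g. K=3 for math
    correctness, value alignment, and dialogue quality).
    """
    pi_c, pi_r = np.asarray(pi_chosen, float), np.asarray(pi_rejected, float)
    ref_c, ref_r = np.asarray(ref_chosen, float), np.asarray(ref_rejected, float)
    # Standard DPO preference margin, computed independently per head.
    margin = beta * ((pi_c - ref_c) - (pi_r - ref_r))
    # -log(sigmoid(margin)) per head, in a numerically stable form.
    per_head = np.log1p(np.exp(-margin))
    # The reward stays a vector; a weight vector (uniform by default)
    # decides how much each objective contributes to the training signal.
    if weights is None:
        w = np.full_like(per_head, 1.0 / per_head.size)
    else:
        w = np.asarray(weights, float)
    return float(w @ per_head)
```

With a large positive margin on every head the loss approaches zero, while a head whose preference is violated keeps its term above log 2, so a conflict between objectives stays visible per dimension instead of washing out in a scalar average.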
Problem

Research questions and friction points this paper is trying to address.

Aligning models across verifiable and non-verifiable reward domains
Resolving conflicts between competing objectives in multi-objective training
Providing fine-grained user control during model inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardizes process reward model training across settings
Uses multi-action head DPO for multi-objective alignment
Provides fine-grained inference-time user control
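The last bullet, fine-grained inference-time control, could plausibly be realized by letting the user weight the per-objective heads when the next-token distribution is formed. A minimal sketch, with the function name and the logit-averaging scheme assumed for illustration rather than taken from the paper:

```python
import numpy as np

def controlled_next_token(head_logits, objective_weights):
    """Blend per-objective head logits with user-chosen weights (hypothetical).

    head_logits: shape (K, V) -- one row of vocabulary logits per head.
    objective_weights: length-K user preference, e.g. [0.8, 0.1, 0.1]
    to favor math correctness over the other objectives.
    """
    w = np.asarray(objective_weights, float)
    w = w / w.sum()                             # normalize to a convex mix
    mixed = w @ np.asarray(head_logits, float)  # weighted average over heads
    probs = np.exp(mixed - mixed.max())
    probs /= probs.sum()                        # softmax over the vocabulary
    return int(probs.argmax()), probs           # greedy token + distribution
```

Because the weights enter only at decoding time, a single trained model can be steered toward different objective trade-offs per request, without retraining.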