UniARM: Towards a Unified Autoregressive Reward Model for Multi-Objective Test-Time Alignment

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of multi-objective alignment in autoregressive reward modeling, where existing approaches often suffer from entangled preference representations or insufficient interaction among objectives due to parameter or feature separation. To overcome this, the authors propose UniARM, a unified autoregressive reward model that jointly models all preference dimensions within a shared parameter space. Central to UniARM is a novel Preference-Modulated & Shared Low-Rank Adaptation (MoSLoRA) architecture: it first extracts preference-agnostic shared features and then applies affine modulation conditioned on mixed preference vectors, enabling precise and controllable trade-offs among multiple objectives at inference time. UniARM is the first framework to support multi-objective alignment within a single model while maintaining compatibility with frozen large language models, thereby facilitating efficient deployment on large-scale systems and significantly enhancing controllability and practical utility.
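To make the modulation idea concrete, the following is a minimal PyTorch sketch of a preference-modulated shared low-rank adapter, assuming a FiLM-style affine transform generated from the preference weights. The class name `MoSLoRALayer`, the `modulator` hypernetwork, and all dimensions are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class MoSLoRALayer(nn.Module):
    """Hypothetical preference-modulated shared low-rank adapter (sketch)."""

    def __init__(self, d_model: int, rank: int, num_prefs: int):
        super().__init__()
        # Shared, preference-agnostic low-rank branch: x -> B(A(x)).
        self.lora_A = nn.Linear(d_model, rank, bias=False)
        self.lora_B = nn.Linear(rank, d_model, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # standard LoRA zero-init
        # Hypernetwork: mixed preference vector -> affine params (gamma, beta).
        self.modulator = nn.Linear(num_prefs, 2 * d_model)

    def forward(self, x: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        shared = self.lora_B(self.lora_A(x))              # shared features
        gamma, beta = self.modulator(pref).chunk(2, dim=-1)
        return x + (1 + gamma) * shared + beta            # modulated residual


# Usage: modulate one hidden state under a 70/30 preference mix.
layer = MoSLoRALayer(d_model=768, rank=8, num_prefs=2)
h = torch.randn(1, 768)                  # hidden state from a frozen LLM
w = torch.tensor([[0.7, 0.3]])           # mixed preference weights
out = layer(h, w)                        # shape: (1, 768)
```

Because only the affine parameters depend on the preference vector, the shared branch stays disentangled from any single objective, which is the property the summary attributes to MoSLoRA.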

📝 Abstract
Multi-objective alignment aims to align LLM responses with multiple human preference objectives. Among existing methods, guiding the generation of frozen LLMs through autoregressive reward models (ARMs) to accomplish multi-objective test-time alignment is a low-cost solution. However, these methods typically rely on independent parameters for each preference objective: either they train ARMs independently across preference dimensions, which neglects interactions among preference features, or they train a single ARM with separate feature extraction modules for each preference, which can cause feature entanglement. Both strategies can result in misalignment between generated outputs and user preferences. To address this limitation, we propose Preference-Modulated & Shared Low-Rank Adaptation (MoSLoRA) for ARM training, which first extracts shared features via a preference-agnostic module and then applies affine transformations to the shared features via a preference modulation module conditioned on mixed preference vectors. This design mitigates feature entanglement and enables precise control over preference trade-offs during inference. Building on this, we introduce the Unified Autoregressive Reward Model (UniARM), a novel framework for multi-objective test-time alignment. UniARM jointly models all preference dimensions in a single parameter space, eliminating the need for independent parameters for each preference objective, and remains compatible with frozen LLMs, facilitating efficient deployment on larger-scale LLMs and enhancing its practical usability.
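The test-time alignment setting in the abstract, steering a frozen LLM with per-token ARM rewards, can be sketched generically as reward-guided decoding. The helper `arm_reward_fn`, the weight `beta`, and the top-k rescoring below are assumptions for illustration, not UniARM's exact procedure.

```python
import torch

def guided_decode_step(base_logits: torch.Tensor, arm_reward_fn,
                       prefix_ids: list, beta: float = 1.0,
                       top_k: int = 20) -> int:
    """Pick the next token by rescoring the frozen LLM's top-k candidates
    with an ARM reward on the extended prefix (generic sketch)."""
    topk = torch.topk(base_logits, top_k)       # base model's candidates
    scores = topk.values.clone()
    for i, tok in enumerate(topk.indices.tolist()):
        # arm_reward_fn (assumed) scores prefix + candidate; the user's
        # mixed preference vector is baked into the ARM call.
        scores[i] += beta * arm_reward_fn(prefix_ids + [tok])
    return int(topk.indices[torch.argmax(scores)])
```

Raising `beta` trades base-model fluency for reward adherence; in a multi-objective ARM such as UniARM, the preference mix would shift which candidates the reward favors without retraining the base LLM.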
Problem

Research questions and friction points this paper is trying to address.

multi-objective alignment
autoregressive reward model
feature entanglement
test-time alignment
preference modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Autoregressive Reward Model
Multi-objective Alignment
Preference Modulation
Shared Low-Rank Adaptation
Test-Time Alignment
Hongyan Xie
School of Computer, Beihang University
Yikun Ban
Beihang University, University of Illinois Urbana-Champaign
Reinforcement Learning · Ensemble Learning
Ruiyu Fang
Institute of Artificial Intelligence (TeleAI), China Telecom
Zixuan Huang
School of Computer, Beihang University
Deqing Wang
School of Computer, Beihang University
Jianxin Li
School of Computer Science & Engineering, Beihang University
Big Data · AI · Intelligent Computing
Yitong Yao
Institute of Artificial Intelligence (TeleAI), China Telecom
Chao Wang
Institute of Artificial Intelligence (TeleAI), China Telecom
Shuangyong Song
Institute of Artificial Intelligence (TeleAI), China Telecom