Merge and Guide: Unifying Model Merging and Guided Decoding for Controllable Multi-Objective Generation

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-objective controllable generation faces challenges including diverse user requirements, inefficient parameter-level control, high decoding-guidance overhead, and overreliance on single-model capabilities. Method: This paper proposes MAGE, a two-stage framework that (i) identifies and resolves compatibility mismatches between guidance and base models; (ii) dynamically constructs multi-objective fused base models via model merging and unifies explicit/implicit value models as collaborative guidance agents; and (iii) integrates linear mode connectivity analysis, predictive ensembling, and two-stage guided decoding to jointly optimize parameter- and decoding-level control. Contribution/Results: Experiments demonstrate that MAGE significantly outperforms state-of-the-art methods in controllability, Pareto optimality, and cross-task adaptability, while reducing memory overhead and enhancing multi-objective coordination.

📝 Abstract
Adapting to diverse user needs at test time is a key challenge in controllable multi-objective generation. Existing methods are insufficient: merging-based approaches provide indirect, suboptimal control at the parameter level, often disregarding the impacts of multiple objectives. While decoding-based guidance is more direct, it typically requires aggregating logits from multiple expert models, incurring significant space overhead and relying heavily on individual model capacity. To address these issues, we introduce Merge-And-GuidE (MAGE), a two-stage framework that leverages model merging for guided decoding. We first identify a critical compatibility problem between the guidance and base models. In Stage 1, MAGE resolves this by dynamically constructing a more robust base model, merging a series of backbone models that account for multiple objectives. In Stage 2, we merge explicit and implicit value models into a unified guidance proxy, which then steers the decoding of the base model from Stage 1. Our analysis empirically validates Linear Mode Connectivity (LMC) in value models, explores the relationship between model merging and prediction ensembling, and demonstrates the enhanced controllability afforded by our approach. Extensive experiments show that our method outperforms existing approaches, achieving superior controllability, Pareto-optimal performance, and enhanced adaptability.
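The weight-space merging the abstract relies on (and the Linear Mode Connectivity it validates) can be illustrated as a simple convex combination of parameter sets. This is a minimal sketch, not the paper's exact algorithm; the function name `merge_state_dicts` and the use of plain float dictionaries in place of real tensors are assumptions for illustration:

```python
def merge_state_dicts(state_dicts, weights):
    """Merge several models' parameters by convex combination (weight-space merging).

    state_dicts: list of dicts mapping parameter name -> value
    weights: per-model coefficients, assumed to sum to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "merging weights must sum to 1"
    merged = {}
    for name in state_dicts[0]:
        # Interpolate each parameter across all models with the given weights.
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged
```

Under Linear Mode Connectivity, points along such an interpolation path between compatible checkpoints stay in a low-loss region, which is what makes merging a viable substitute for keeping every expert model in memory.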
Problem

Research questions and friction points this paper is trying to address.

Addressing insufficient control in multi-objective text generation
Resolving compatibility between guidance and base models
Reducing space overhead from multiple expert model aggregation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework combining model merging and guided decoding
Dynamic base model construction from multiple backbone models
Unified guidance proxy merging explicit and implicit value models
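The guided-decoding stage can be sketched as shifting the base model's next-token logits by a score from the merged guidance proxy. This is an illustrative sketch only; the function names, the single scalar `beta`, and the list-of-floats representation are assumptions, not the paper's implementation:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def guided_next_token_logits(base_logits, guidance_scores, beta=1.0):
    """Steer decoding: add the guidance proxy's per-token score,
    scaled by beta, to the base model's logits."""
    return [l + beta * g for l, g in zip(base_logits, guidance_scores)]
```

Because the explicit and implicit value models are merged into one proxy beforehand, only a single guidance forward pass is needed per decoding step, instead of aggregating logits from several expert models.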
Guofu Xie
Renmin University of China
Large Language Model · Reinforcement Learning
Chen Zhang
Gaoling School of Artificial Intelligence, Renmin University of China
Xiao Zhang
Gaoling School of Artificial Intelligence, Renmin University of China
Yunsheng Shi
Tencent
Ting Yao
Tencent
Jun Xu
Gaoling School of Artificial Intelligence, Renmin University of China