🤖 AI Summary
Multi-objective controllable generation faces several challenges: diverse user requirements at test time, indirect and inefficient parameter-level control, high memory overhead from decoding-time guidance, and overreliance on the capacity of individual models. Method: This paper proposes MAGE (Merge-And-GuidE), a two-stage framework that first identifies and resolves a compatibility mismatch between guidance and base models. In Stage 1, MAGE dynamically constructs a multi-objective fused base model via model merging; in Stage 2, it merges explicit and implicit value models into a unified guidance proxy that steers the decoding of the Stage-1 base model. The accompanying analysis validates linear mode connectivity in value models, relates model merging to prediction ensembling, and shows how two-stage guided decoding jointly exploits parameter- and decoding-level control. Contribution/Results: Experiments demonstrate that MAGE significantly outperforms state-of-the-art methods in controllability, Pareto optimality, and cross-task adaptability, while reducing memory overhead and improving multi-objective coordination.
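The parameter-level side of the framework relies on weight-space model merging. As a minimal sketch of that operation (weighted averaging of parameter dictionaries, the interpolation that linear mode connectivity justifies), assuming toy NumPy tensors and a hypothetical `merge_state_dicts` helper rather than MAGE's actual implementation:

```python
import numpy as np

def merge_state_dicts(state_dicts, coeffs):
    """Merge a list of parameter dicts by convex combination of weights.

    This is a generic weight-averaging sketch, not the paper's exact
    merging procedure; names and shapes are illustrative.
    """
    assert abs(sum(coeffs) - 1.0) < 1e-8, "coefficients must sum to 1"
    merged = {}
    for name in state_dicts[0]:
        # Weighted sum of the same-named tensor across all models.
        merged[name] = sum(c * sd[name] for c, sd in zip(coeffs, state_dicts))
    return merged

# Two toy "models", each with a single weight tensor.
model_a = {"w": np.array([1.0, 2.0])}
model_b = {"w": np.array([3.0, 6.0])}
fused = merge_state_dicts([model_a, model_b], [0.5, 0.5])
# fused["w"] -> array([2., 4.])
```

Varying the coefficients gives a continuum of fused models, which is how merging can trade off multiple objectives at the parameter level.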
📝 Abstract
Adapting to diverse user needs at test time is a key challenge in controllable multi-objective generation. Existing methods are insufficient: merging-based approaches provide indirect, suboptimal control at the parameter level, often disregarding the interplay among multiple objectives. While decoding-based guidance is more direct, it typically requires aggregating logits from multiple expert models, incurring significant space overhead and relying heavily on individual model capacity. To address these issues, we introduce Merge-And-GuidE (MAGE), a two-stage framework that leverages model merging for guided decoding. We first identify a critical compatibility problem between the guidance and base models. In Stage 1, MAGE resolves this by dynamically constructing a more robust base model, merging a series of backbone models that account for multiple objectives. In Stage 2, we merge explicit and implicit value models into a unified guidance proxy, which then steers the decoding of the base model from Stage 1. Our analysis empirically validates Linear Mode Connectivity (LMC) in value models, explores the relationship between model merging and prediction ensembling, and demonstrates the enhanced controllability afforded by our approach. Extensive experiments show that our method outperforms existing approaches, achieving superior controllability, Pareto-optimal performance, and enhanced adaptability.
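The decoding-level side, guided decoding with a value model, can be sketched as reweighting the base model's next-token distribution by the guidance proxy's per-token value estimates. The exponential-tilting form p(x) ∝ p_base(x)·exp(β·V(x)) and the function name below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def guided_next_token_probs(base_logits, value_scores, beta=1.0):
    """Tilt base-model logits by value-model scores (sketch, not MAGE itself).

    Equivalent in log-space to p(x) proportional to p_base(x) * exp(beta * V(x)).
    """
    logits = base_logits + beta * value_scores  # combine in log-space
    logits = logits - logits.max()              # subtract max for stability
    probs = np.exp(logits)
    return probs / probs.sum()

base_logits = np.array([2.0, 1.0, 0.0])    # base model prefers token 0
value_scores = np.array([0.0, 0.0, 3.0])   # guidance proxy favors token 2
probs = guided_next_token_probs(base_logits, value_scores, beta=1.0)
# token 2's tilted logit (0 + 3) now exceeds token 0's (2 + 0)
```

The guidance strength β plays the role of a test-time control knob: β = 0 recovers the base model, while larger β pushes generation toward the value model's preference.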