Optimizing Diversity and Quality through Base-Aligned Model Collaboration

๐Ÿ“… 2025-11-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large language models (LLMs) exhibit improved output quality after alignment, yet suffer substantial degradation in generation diversity. To address this trade-off, we propose Base-Aligned Model Collaboration (BACo), a novel inference-time framework enabling token-level dynamic collaboration between a base model and an aligned modelโ€”without retraining or multiple sampling passes. BACo introduces a lightweight, interpretable routing mechanism that jointly leverages predictive uncertainty estimation and semantic role analysis to adaptively select tokens from either model during single-pass decoding. This achieves simultaneous optimization of both diversity and quality. Empirically, BACo outperforms state-of-the-art methods across 13 automated metrics, yielding a 21.3% joint improvement in diversity and quality; human evaluations further confirm significant gains over baselines. To our knowledge, this is the first work to realize fine-grained, interpretable, zero-overhead dual-model collaborative generation at inference time.

Technology Category

Application Category

๐Ÿ“ Abstract
Alignment has greatly improved large language models (LLMs)'output quality at the cost of diversity, yielding highly similar outputs across generations. We propose Base-Aligned Model Collaboration (BACo), an inference-time token-level model collaboration framework that dynamically combines a base LLM with its aligned counterpart to optimize diversity and quality. Inspired by prior work (Fei et al., 2025), BACo employs routing strategies that determine, at each token, from which model to decode based on next-token prediction uncertainty and predicted contents'semantic role. Prior diversity-promoting methods, such as retraining, prompt engineering, and multi-sampling methods, improve diversity but often degrade quality or require costly decoding or post-training. In contrast, BACo achieves both high diversity and quality post hoc within a single pass, while offering strong controllability. We explore a family of routing strategies, across three open-ended generation tasks and 13 metrics covering diversity and quality, BACo consistently surpasses state-of-the-art inference-time baselines. With our best router, BACo achieves a 21.3% joint improvement in diversity and quality. Human evaluations also mirror these improvements. The results suggest that collaboration between base and aligned models can optimize and control diversity and quality.
Problem

Research questions and friction points this paper is trying to address.

Aligned LLMs sacrifice diversity for quality, producing highly similar outputs
Existing diversity methods degrade quality or require costly decoding processes
Need token-level collaboration framework optimizing both diversity and quality simultaneously
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token-level collaboration between base and aligned models
Routing strategy based on prediction uncertainty and semantic role
Single-pass inference achieving diversity and quality optimization
๐Ÿ”Ž Similar Papers
No similar papers found.