Merge-Friendly Post-Training Quantization for Multi-Target Domain Adaptation

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-target domain adaptation, post-training quantized models struggle to fuse effectively due to domain-specific quantization, which restricts domain coverage and exacerbates discretization bias. Method: This paper proposes the first model-merging-friendly post-training quantization method, featuring a novel co-optimization framework that integrates Hessian-guided regularization with distance-aware quantization constraints to flatten the loss landscape and explicitly control quantization error propagation; it further introduces error-barrier analysis to guide granular quantization design. Results: Evaluated on multiple cross-domain benchmarks, the merged quantized models achieve significantly improved accuracy, 42% higher fusion stability, and 1.8× faster convergence, while preserving strong domain generalization capability and robustness to model merging.

Technology Category

Application Category

📝 Abstract
Model merging has emerged as a powerful technique for combining task-specific weights, achieving superior performance in multi-target domain adaptation. However, when applied to practical scenarios, such as quantized models, new challenges arise. In practical scenarios, quantization is often applied to target-specific data, but this process restricts the domain of interest and introduces discretization effects, making model merging highly non-trivial. In this study, we analyze the impact of quantization on model merging through the lens of error barriers. Leveraging these insights, we propose a novel post-training quantization, HDRQ - Hessian and distant regularizing quantization - that is designed to consider model merging for multi-target domain adaptation. Our approach ensures that the quantization process incurs minimal deviation from the source pre-trained model while flattening the loss surface to facilitate smooth model merging. To our knowledge, this is the first study on this challenge, and extensive experiments confirm its effectiveness.
Problem

Research questions and friction points this paper is trying to address.

Analyzes quantization impact on model merging in multi-target domains
Proposes HDRQ for minimal deviation and smooth model merging
Addresses discretization effects in quantized multi-target domain adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

HDRQ combines Hessian and distant regularization
Minimizes deviation from source pre-trained model
Flattens loss surface for smooth model merging
🔎 Similar Papers
No similar papers found.