🤖 AI Summary
To address privacy leakage, high computational overhead, and catastrophic forgetting in cross-institutional knowledge fusion of large language models (LLMs) for distributed healthcare, this paper proposes an efficient parameter-space model merging framework tailored for medical edge deployment. The method integrates selective optimal transport with cosine-similarity-weighted interpolation in a hierarchical fusion strategy, mitigating permutation variance while keeping computational cost low. We systematically evaluate Task Arithmetic, linear averaging, DARE-TIES, DELLA, and Breadcrumbs against this approach on five medical benchmarks using Mistral-7B-derived models. Results show that simple averaging is robust, with Task Arithmetic achieving 45.80% accuracy on MedQA. The hierarchical fusion strikes a favorable balance between accuracy and efficiency, with gains in privacy preservation, robustness, and scalability, offering a practical merging paradigm for resource-constrained medical edge environments.
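As context for the Task Arithmetic baseline mentioned above, the technique merges fine-tuned models by adding their scaled parameter deltas ("task vectors") back onto the shared base model. A minimal sketch with NumPy, treating each model as one flat parameter vector (function name and scaling are illustrative, not from this paper):

```python
import numpy as np

def task_arithmetic_merge(theta_base, theta_fts, lam=1.0):
    """Task Arithmetic merge of fine-tuned models sharing one base.

    Each fine-tuned model contributes a task vector (its delta from the
    base); the merged model is the base plus the scaled sum of deltas.
    `lam` is a hypothetical scaling coefficient for this sketch.
    """
    task_vectors = [theta - theta_base for theta in theta_fts]
    return theta_base + lam * np.sum(task_vectors, axis=0)
```

In practice this operation is applied tensor-by-tensor over the full state dict rather than to a single flattened vector.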
📝 Abstract
Large Language Models (LLMs) face significant challenges in distributed healthcare, including consolidating specialized domain knowledge across institutions while maintaining privacy, reducing computational overhead, and preventing catastrophic forgetting during model updates. This paper presents a systematic evaluation of six parameter-space merging techniques applied to two architecturally compatible medical LLMs derived from the Mistral-7B base model. We introduce a hierarchical method that combines selective Optimal Transport (OT) alignment for attention layers with cosine-similarity-weighted interpolation, designed to address permutation variance while minimizing computational overhead for edge deployment scenarios. Our study evaluates Task Arithmetic, Linear Averaging, DARE-TIES, DELLA, Breadcrumbs, and our Hierarchical approach across five medical benchmarks. Results demonstrate that architecturally compatible models benefit significantly from simple averaging methods, with Task Arithmetic achieving 45.80% accuracy on MedQA and outperforming complex pruning-based approaches. These findings offer critical insights for deploying distributed medical AI in resource-constrained IoT environments, where computational efficiency and model compatibility are paramount. Our work establishes that for architecturally compatible models, simple averaging provides a robust and computationally efficient baseline for knowledge consolidation, offering a pragmatic path forward for scalable medical AI systems.
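The cosine-similarity-weighted interpolation component of the hierarchical method can be sketched as follows. This is a minimal illustration assuming per-layer flattened weight vectors; the mapping from cosine similarity to interpolation weight is a hypothetical choice for this sketch, not the paper's exact formula:

```python
import numpy as np

def cosine_weighted_merge(theta_a, theta_b, eps=1e-8):
    """Merge two same-shaped parameter vectors, weighting the blend by
    their cosine similarity.

    Illustrative rule: highly similar layers (cos -> 1) blend evenly,
    while divergent layers (cos -> -1) keep the first model's weights.
    """
    cos = float(np.dot(theta_a, theta_b) /
                (np.linalg.norm(theta_a) * np.linalg.norm(theta_b) + eps))
    # Map cosine similarity in [-1, 1] to a weight in [0, 0.5] for theta_b.
    alpha = 0.5 * (cos + 1.0) / 2.0
    return (1.0 - alpha) * theta_a + alpha * theta_b
```

A full implementation would apply such a rule layer-wise across both models' state dicts, after the OT-based alignment step for attention layers described above.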