A Novel Hierarchical Integration Method for Efficient Model Merging in Medical LLMs

📅 2025-11-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address privacy leakage, high computational overhead, and catastrophic forgetting in cross-institutional knowledge fusion of large language models (LLMs) for distributed healthcare, this paper proposes an efficient parameter-space model-merging framework tailored for medical edge deployment. Our method integrates selective optimal transport with cosine-similarity-weighted interpolation in a hierarchical fusion strategy, mitigating permutation variance while reducing computational complexity. We systematically evaluate Task Arithmetic, linear averaging, DARE-TIES, DELLA, and Breadcrumbs against our approach on five medical benchmarks using Mistral-7B. Results show that simple averaging is robust for architecturally compatible models, while Task Arithmetic achieves the best MedQA accuracy at 45.80%. Our hierarchical fusion strikes a balance between accuracy and efficiency, delivering enhanced privacy preservation, robustness, and scalability, and establishing a practical merging paradigm for resource-constrained medical edge environments.
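The summary's "mitigating permutation variance" refers to the fact that two independently fine-tuned models may represent equivalent functions with their attention heads in different orders, so naive averaging mixes mismatched heads. A minimal sketch of the idea behind the selective OT alignment step is a hard head-to-head assignment; the brute-force search below is an illustrative stand-in (the paper's actual OT formulation is not reproduced here), and `align_heads` is a hypothetical helper name:

```python
import itertools
import numpy as np

def align_heads(w_a, w_b):
    """Permutation-align attention heads of model B to model A before merging.

    Brute-force stand-in for an optimal-transport alignment step:
    w_a, w_b are (n_heads, head_dim) arrays of flattened per-head weights.
    Returns B's heads reordered to best match A's, head by head.
    """
    n = w_a.shape[0]
    # cost[i, j]: squared distance between head i of A and head j of B
    cost = ((w_a[:, None, :] - w_b[None, :, :]) ** 2).sum(axis=-1)
    # Search all permutations for the minimum total matching cost
    # (fine for a sketch; real systems solve this as an assignment problem).
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return w_b[list(best)]
```

Once B's heads are permuted into A's ordering, any of the averaging-style merges evaluated in the paper can be applied without mixing mismatched heads.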

📝 Abstract
Large Language Models (LLMs) face significant challenges in distributed healthcare, including consolidating specialized domain knowledge across institutions while maintaining privacy, reducing computational overhead, and preventing catastrophic forgetting during model updates. This paper presents a systematic evaluation of six parameter-space merging techniques applied to two architecturally compatible medical LLMs derived from the Mistral-7B base model. We introduce a novel hierarchical method that combines selective Optimal Transport (OT) alignment for attention layers with cosine similarity-weighted interpolation, designed to address permutation variance while minimizing computational overhead for edge deployment scenarios. Our study evaluates Task Arithmetic, Linear Averaging, DARE-TIES, DELLA, Breadcrumbs, and our Hierarchical approach across five medical benchmarks. Results demonstrate that architecturally compatible models benefit significantly from simple averaging methods, with Task Arithmetic achieving 45.80% accuracy on MedQA, outperforming complex pruning-based approaches. These findings offer critical insights for the deployment of distributed medical AI in resource-constrained IoT environments, where computational efficiency and model compatibility are paramount. Our work establishes that for architecturally compatible models, simple averaging provides a robust and computationally efficient baseline for knowledge consolidation, offering a pragmatic path forward for scalable medical AI systems.
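The two baselines the abstract highlights can be stated in a few lines of parameter arithmetic. The sketch below, assuming state dicts with matching keys and a shared base checkpoint, shows Linear Averaging and Task Arithmetic (which adds scaled "task vectors", fine-tuned weights minus base weights, back onto the base):

```python
import numpy as np

def linear_average(weights_a, weights_b):
    """Uniform parameter-space average of two same-architecture models."""
    return {k: (weights_a[k] + weights_b[k]) / 2.0 for k in weights_a}

def task_arithmetic(base, weights_a, weights_b, scale=0.5):
    """Task Arithmetic: merged = base + scale * (tau_a + tau_b),
    where tau_x = weights_x - base is model x's task vector."""
    merged = {}
    for k in base:
        tau_a = weights_a[k] - base[k]
        tau_b = weights_b[k] - base[k]
        merged[k] = base[k] + scale * (tau_a + tau_b)
    return merged
```

With `scale=0.5` and a shared base, Task Arithmetic reduces to linear averaging of the two fine-tuned models, which is consistent with the paper's finding that simple averaging is a strong baseline for architecturally compatible checkpoints.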
Problem

Research questions and friction points this paper is trying to address.

Consolidating specialized medical knowledge across institutions while preserving data privacy
Reducing computational overhead for model deployment in resource-constrained healthcare environments
Preventing catastrophic forgetting during model updates in distributed medical LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical method combining selective Optimal Transport (OT) alignment for attention layers
Cosine similarity-weighted interpolation for the remaining layers
Design that minimizes computational overhead for edge deployment scenarios
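A minimal sketch of the cosine similarity-weighted interpolation idea: each layer's mixing weight is derived from the cosine similarity between the corresponding parameter tensors. The specific weighting rule below (uniform average for near-identical layers, falling back toward model A for dissimilar ones) is an illustrative assumption, not the paper's exact schedule:

```python
import numpy as np

def cosine_weighted_merge(w_a, w_b):
    """Per-layer interpolation whose mixing weight tracks cosine similarity.

    For each layer: cos = 1 -> uniform average; cos <= 0 -> keep model A.
    (Illustrative policy; the paper's exact weighting is not reproduced.)
    """
    merged = {}
    for name in w_a:
        a, b = w_a[name], w_b[name]
        flat_a, flat_b = a.ravel(), b.ravel()
        cos = float(flat_a @ flat_b /
                    (np.linalg.norm(flat_a) * np.linalg.norm(flat_b) + 1e-12))
        alpha = 0.5 * max(cos, 0.0)  # share of the layer taken from model B
        merged[name] = (1.0 - alpha) * a + alpha * b
    return merged
```

Because the weight is computed layer by layer, agreeing layers are consolidated while strongly diverging layers are left closer to one parent, at a cost of a single dot product and two norms per layer, in keeping with the edge-deployment efficiency goal.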
Prakrit Timilsina
Think for Tech, Kathmandu, Nepal
Anuj Nepal
Deakin Cyber Research and Innovation Centre, Deakin University / Universal Higher Education, Melbourne, Australia
Rajan Kadel
School of IT and Engineering, Melbourne Institute of Technology, Melbourne, Australia
Robin Doss
Deakin Cyber Research & Innovation Centre (Deakin Cyber), Deakin University
Cyber Security · Privacy