AI Summary
Traditional Mixture-of-Experts (MoE) models lack explicit modeling of anatomical structures and regional disease heterogeneity in medical imaging, limiting performance in interstitial lung disease (ILD) classification. To address this, we propose Regional Expert Networks (REN), the first MoE framework incorporating anatomical priors: the lung is partitioned into seven clinically defined anatomical regions, each assigned a dedicated expert network. A multimodal gating mechanism dynamically integrates deep features extracted by CNN, ViT, and Mamba backbones with radiomic biomarkers. The architecture ensures both interpretability and scalability. Experiments demonstrate that REN achieves a mean AUC of 0.8646 on ILD classification, outperforming SwinUNETR by 12.5%. Notably, experts for the lower lung lobes achieve AUCs of 0.88-0.90, significantly surpassing baselines and aligning with established clinical and pathological distributions of ILD.
Abstract
Mixture-of-Experts (MoE) architectures have significantly contributed to scalable machine learning by enabling specialized subnetworks to tackle complex tasks efficiently. However, traditional MoE systems lack the domain-specific constraints essential for medical imaging, where anatomical structure and regional disease heterogeneity strongly influence pathological patterns. Here, we introduce Regional Expert Networks (REN), the first anatomically informed MoE framework tailored specifically for medical image classification. REN leverages anatomical priors to train seven specialized experts, each dedicated to distinct lung lobes and bilateral lung combinations, enabling precise modeling of region-specific pathological variations. Multimodal gating mechanisms dynamically integrate radiomic biomarkers and deep learning (DL) features (CNN, ViT, Mamba) to weight expert contributions optimally. Applied to interstitial lung disease (ILD) classification, REN achieves consistently superior performance: the radiomics-guided ensemble reached an average AUC of 0.8646 ± 0.0467, a +12.5% improvement over the SwinUNETR baseline (AUC 0.7685, p = 0.031). Region-specific experts further revealed that lower-lobe models achieved AUCs of 0.88-0.90, surpassing DL counterparts (CNN: 0.76-0.79) and aligning with known disease progression patterns. Through rigorous patient-level cross-validation, REN demonstrates strong generalizability and clinical interpretability, presenting a scalable, anatomically guided approach readily extensible to other structured medical imaging applications.
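The core computation described above, region-specific experts whose outputs are fused by a gate conditioned on deep and radiomic features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear experts, the feature dimensions, and the mean-pooled gate input are all hypothetical stand-ins for the real CNN/ViT/Mamba backbones and radiomics pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- chosen for illustration, not from the paper.
N_REGIONS = 7      # lung lobes + bilateral combinations, as in REN
FEAT_DIM = 16      # per-region deep features (CNN/ViT/Mamba in the paper)
RADIOMIC_DIM = 8   # radiomic biomarkers fed to the gate
N_CLASSES = 2      # e.g. ILD vs. non-ILD

# One linear "expert" per anatomical region (stand-in for a full network).
expert_W = rng.normal(size=(N_REGIONS, FEAT_DIM, N_CLASSES))

# Gating network: pooled deep features + radiomics -> one weight per expert.
gate_W = rng.normal(size=(FEAT_DIM + RADIOMIC_DIM, N_REGIONS))

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def ren_forward(region_feats, radiomics):
    """region_feats: (N_REGIONS, FEAT_DIM); radiomics: (RADIOMIC_DIM,)."""
    # Each expert scores only its own region's features.
    expert_logits = np.einsum("rf,rfc->rc", region_feats, expert_W)
    # Gate weights the experts using global deep + radiomic evidence.
    gate_in = np.concatenate([region_feats.mean(axis=0), radiomics])
    weights = softmax(gate_in @ gate_W)   # (N_REGIONS,), sums to 1
    return weights @ expert_logits        # (N_CLASSES,) fused logits

logits = ren_forward(rng.normal(size=(N_REGIONS, FEAT_DIM)),
                     rng.normal(size=(RADIOMIC_DIM,)))
print(logits.shape)  # (2,)
```

The softmax gate is what makes the fusion interpretable: its weights directly indicate which anatomical regions drive a given prediction, which is how region-level findings such as the lower-lobe dominance could be surfaced.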