π€ AI Summary
Poor generalization and robustness of 3D medical image analysis stem from scanning protocol variations, equipment heterogeneity, and patient motion. To address this, we propose an Enhancement-Aware Multi-Encoder Learning (EAMEL) framework. Our method treats diverse image augmentations as complementary feature sources and introduces a novel adaptive controller module (BD) that enables protocol-agnostic, structurally faithful feature fusion while preserving augmentation-specific representations. Integrating deep generative modeling, a multi-branch encoder architecture, and feature-level adaptive fusion, EAMEL is applied to CT-to-T1-MRI cross-modal translation. Experiments demonstrate statistically significant improvements over state-of-the-art baselines in PSNR (+2.1 dB) and SSIM (+0.042), alongside superior robustness to geometric transformations and input perturbations. Comprehensive ablation studies and cross-dataset evaluations validate strong generalization across diverse scanners, protocols, and anatomical domains.
π Abstract
Medical imaging is critical for diagnostics, but clinical adoption of advanced AI-driven imaging faces challenges due to patient variability, image artifacts, and limited model generalization. While deep learning has transformed image analysis, 3D medical imaging still suffers from data scarcity and inconsistencies due to acquisition protocols, scanner differences, and patient motion. Traditional augmentation uses a single pipeline for all transformations, disregarding the unique traits of each augmentation and struggling with large data volumes. To address these challenges, we propose a Multi-encoder Augmentation-Aware Learning (MEAL) framework that leverages four distinct augmentation variants processed through dedicated encoders. Three fusion strategies such as concatenation (CC), fusion layer (FL), and adaptive controller block (BD) are integrated to build multi-encoder models that combine augmentation-specific features before decoding. MEAL-BD uniquely preserves augmentation-aware representations, enabling robust, protocol-invariant feature learning. As demonstrated in a Computed Tomography (CT)-to-T1-weighted Magnetic Resonance Imaging (MRI) translation study, MEAL-BD consistently achieved the best performance on both unseen- and predefined-test data. On both geometric transformations (like rotations and flips) and non-augmented inputs, MEAL-BD outperformed other competing methods, achieving higher mean peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) scores. These results establish MEAL as a reliable framework for preserving structural fidelity and generalizing across clinically relevant variability. By reframing augmentation as a source of diverse, generalizable features, MEAL supports robust, protocol-invariant learning, advancing clinically reliable medical imaging solutions.