🤖 AI Summary
Existing MRI inpainting methods struggle with anatomical plausibility and visual fidelity when reconstructing large tumor regions, compromising the reliability of downstream clinical analysis tools (e.g., segmentation and registration) on pathological brains. To address this, we propose a framework that integrates multi-model ensembling with lightweight U-Net–based post-processing. First, multiple state-of-the-art inpainting models are ensembled to improve robustness; then median filtering, histogram matching, pixel-wise averaging, and U-Net–driven refinement are applied to improve both anatomical consistency and textural detail in lesion areas. Our method overcomes the performance saturation observed in individual models, achieving state-of-the-art accuracy and generalization on the 2025 BraTS Challenge. Quantitative evaluations confirm significant improvements in anatomical plausibility and clinical utility. The framework is publicly released as a Docker image to facilitate rapid deployment in both clinical and research settings.
📝 Abstract
Magnetic Resonance Imaging (MRI) is the primary imaging modality used in the diagnosis, assessment, and treatment planning of brain pathologies. However, most automated MRI analysis tools, such as segmentation and registration pipelines, are optimized for healthy anatomy and often fail when confronted with large lesions such as tumors. To overcome this, image inpainting techniques locally synthesize healthy brain tissue in tumor regions, enabling the reliable application of general-purpose tools. In this work, we systematically evaluate state-of-the-art inpainting models and observe a saturation in their standalone performance. In response, we introduce a methodology combining model ensembling with efficient post-processing strategies such as median filtering, histogram matching, and pixel-wise averaging. Further anatomical refinement is achieved via a lightweight U-Net enhancement stage. Comprehensive evaluation demonstrates that our proposed pipeline improves the anatomical plausibility and visual fidelity of inpainted regions, yielding higher accuracy and more robust outcomes than individual baseline models. By combining established models with targeted post-processing, we achieve improved and more accessible inpainting outcomes, supporting broader clinical deployment and sustainable, resource-conscious research. Our 2025 BraTS inpainting Docker image is available at https://hub.docker.com/layers/aparida12/brats2025/inpt.
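To make the ensembling and post-processing steps concrete, below is a minimal sketch of the fusion stage described above: pixel-wise averaging of several model predictions, median filtering to suppress outlier voxels, and histogram matching of the inpainted region to the surrounding healthy tissue. The function name `fuse_and_refine`, the toy 2D arrays standing in for model outputs, and the quantile-based histogram matching are illustrative assumptions, not the authors' released implementation (which also includes a U-Net refinement stage not shown here).

```python
import numpy as np
from scipy.ndimage import median_filter


def fuse_and_refine(predictions, reference, mask, size=3):
    """Illustrative fusion of inpainting predictions inside `mask`.

    predictions : list of arrays, one inpainted image per model
    reference   : original image (healthy tissue outside the tumor mask)
    mask        : boolean array marking the inpainted (tumor) region
    """
    # 1) pixel-wise averaging across ensemble members
    fused = np.mean(np.stack(predictions, axis=0), axis=0)
    # 2) median filtering to suppress isolated outlier voxels
    fused = median_filter(fused, size=size)
    # 3) histogram-match the inpainted region to surrounding healthy tissue:
    #    map each inpainted intensity to the same quantile of the
    #    healthy-tissue intensity distribution
    src = fused[mask]
    healthy = reference[~mask]
    ranks = np.searchsorted(np.sort(src), src, side="right") / src.size
    matched = np.quantile(healthy, ranks)
    # paste the refined region back into the original image
    out = reference.copy()
    out[mask] = matched
    return out


# toy usage with random arrays standing in for model outputs
rng = np.random.default_rng(0)
preds = [rng.normal(0.5, 0.1, (32, 32)) for _ in range(3)]
reference = rng.normal(0.5, 0.1, (32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
result = fuse_and_refine(preds, reference, mask)
```

Healthy tissue outside the mask is left untouched; only the masked region is replaced, which mirrors the local nature of the inpainting task.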