🤖 AI Summary
To address the longstanding trade-off among accuracy, inference speed, and deployment flexibility in skull-stripping for neuroimaging, we propose MindGrab, a parameter- and memory-efficient, modality-agnostic 3D fully convolutional network. Its core innovation is a spectral-analysis-guided dilated convolution architecture, trained with a cross-modal paradigm on exclusively synthetic data, enabling robust generalization without any real-world annotated labels. MindGrab reduces model parameters by 95%, cuts GPU memory consumption by 50%, and accelerates inference by over 2× compared to state-of-the-art methods; on resource-constrained devices, it achieves a 10–30× speedup. Quantitatively, it attains a mean Dice coefficient of 95.9±1.6%, closely matching SynthStrip (96.5±1.1%). The framework supports lightweight deployment via both a command-line interface and the web browser, demonstrating strong practicality for clinical and edge-computing scenarios.
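The spectral view of dilated convolution mentioned above can be illustrated in one dimension: dilating a kernel by rate d inserts d-1 zeros between taps, which enlarges the receptive field without adding parameters and resamples the kernel's frequency response at d times the frequency. A minimal NumPy sketch (our own illustration, not the MindGrab architecture or code):

```python
import numpy as np

def dilate(kernel, d):
    """Insert d-1 zeros between taps (dilation rate d)."""
    out = np.zeros((len(kernel) - 1) * d + 1)
    out[::d] = kernel
    return out

base = np.array([1.0, 2.0, 1.0]) / 4.0  # a simple low-pass kernel
N = 256
H0 = np.fft.fft(base, N)
for d in (1, 2, 4):
    # The N-point DFT of the dilated kernel equals the base kernel's DFT
    # resampled at d times the frequency: H_d[k] = H_0[(d*k) mod N].
    Hd = np.fft.fft(dilate(base, d), N)
    same = np.allclose(Hd, H0[(d * np.arange(N)) % N])
    print(f"dilation {d}: receptive field {(len(base) - 1) * d + 1}, "
          f"spectrum matches H0(d*w): {same}")
```

The check passes for every rate: dilation trades spectral resolution for receptive-field size at constant parameter count, which is the property a spectral analysis can exploit when choosing dilation rates.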
📝 Abstract
We developed MindGrab, a parameter- and memory-efficient deep fully convolutional model for volumetric skull-stripping in head images of any modality. Its architecture, informed by a spectral interpretation of dilated convolutions, was trained exclusively on modality-agnostic synthetic data. MindGrab was evaluated on a retrospective dataset of 606 multimodal adult-brain scans (T1, T2, DWI, MRA, PDw MRI, EPI, CT, PET) sourced from the SynthStrip dataset. Performance was benchmarked against SynthStrip, ROBEX, and BET using Dice scores, with Wilcoxon signed-rank significance tests. MindGrab achieved a mean Dice score of 95.9 with standard deviation (SD) 1.6 across modalities, significantly outperforming classical methods (ROBEX: 89.1 SD 7.7, P<0.05; BET: 85.2 SD 14.4, P<0.05). Compared to SynthStrip (96.5 SD 1.1, P=0.0352), MindGrab delivered equivalent or superior performance in nearly half of the tested scenarios, with minor differences (<3% Dice) in the others. MindGrab used 95% fewer parameters than SynthStrip (146,237 vs. 2,566,561). This efficiency yielded at least 2x faster inference and 50% lower memory usage on GPUs, and enabled strong performance (e.g., 10-30x speedup and up to 30x memory reduction) on a wider range of hardware, including systems without high-end GPUs. MindGrab delivers state-of-the-art accuracy with dramatically lower resource demands, and is available in brainchop-cli (https://pypi.org/project/brainchop/) and at brainchop.org.
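The evaluation protocol named in the abstract rests on two standard tools: the Dice coefficient between binary brain masks, and a Wilcoxon signed-rank test on paired per-scan scores. A hedged sketch of both (the masks and score values below are toy data, not results from the paper; the sketch omits tie correction and the p-value, which a library such as scipy.stats.wilcoxon would provide):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def signed_rank_W(a, b):
    """Rank sums (W+, W-) of the Wilcoxon signed-rank test.
    Simplified: drops zero differences, no tie correction."""
    d = (a - b)[a != b]
    ranks = np.empty(len(d))
    ranks[np.argsort(np.abs(d))] = np.arange(1, len(d) + 1)
    return ranks[d > 0].sum(), ranks[d < 0].sum()

# Toy volumetric masks: perturb a slab of the ground truth.
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 32)) > 0.5
pred = truth.copy()
pred[:2] = ~pred[:2]
print(f"Dice: {dice(pred, truth):.3f}")

# Paired per-scan Dice scores for two hypothetical methods.
a = np.array([0.96, 0.95, 0.97, 0.94, 0.96, 0.95])
b = np.array([0.93, 0.92, 0.95, 0.90, 0.94, 0.91])
wp, wn = signed_rank_W(a, b)
print(f"Wilcoxon rank sums: W+ = {wp:.0f}, W- = {wn:.0f}")
```

The test statistic is the smaller of the two rank sums; when one method wins on every scan (as in the toy scores above), that sum is zero, the most extreme value the test can produce for a given sample size.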