A Contrast-Agnostic Method for Ultra-High Resolution Claustrum Segmentation

📅 2024-11-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic segmentation of the claustrum in MRI is challenging due to its low intrinsic contrast and thin, sheet-like anatomy. Method: We propose a contrast- and resolution-agnostic, synthetic-data-driven segmentation framework built upon SynthSeg. During training, multi-contrast (T1/T2/PD/qT1) and multi-resolution (0.35 mm isotropic up to conventional voxel sizes) synthetic images are generated on the fly from label maps; only 18 ultra-high-resolution (mostly ex vivo) manually annotated volumes are required, and no real intensity images are used. Results: On the 18 ultra-high-resolution (0.35 mm) cases, 6-fold cross-validation yields Dice = 0.632, average surface distance (ASD) = 0.458 mm, and volumetric similarity (VS) = 0.867; the method also segments conventional in vivo T1-weighted scans, generalizes across T2/PD/qT1 contrasts, and demonstrates strong test-retest reliability. The core innovation lies in decoupling structural visibility from acquisition parameters, thereby mitigating the low-contrast limitation inherent to small subcortical structures.
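The on-the-fly synthesis at the heart of this approach can be sketched in a few lines. The helper below is a hypothetical, simplified illustration (not the authors' released implementation): each label in a segmentation map receives a randomly drawn Gaussian intensity (random contrast), and the result is blurred by a random amount (standing in for random acquisition resolution), so training never touches real MRI intensities.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_image(label_map, seed=None):
    """Toy sketch of SynthSeg-style on-the-fly synthesis (hypothetical
    helper, not the paper's code): random per-label contrast + random
    blur, driven entirely by the label map."""
    rng = np.random.default_rng(seed)
    n_labels = int(label_map.max()) + 1
    # Random contrast: each label gets its own mean/std, so no real
    # MRI intensities are needed -- only the label map.
    means = rng.uniform(0.0, 1.0, size=n_labels)
    stds = rng.uniform(0.01, 0.1, size=n_labels)
    image = rng.normal(means[label_map], stds[label_map])
    # Random resolution: blur by a random sigma to mimic voxel sizes
    # from ultra-high resolution up to conventional ~1 mm scans.
    return gaussian_filter(image, sigma=rng.uniform(0.0, 2.0))

# Usage: a tiny map with a thin, claustrum-like sheet (label 2)
label_map = np.zeros((16, 16, 16), dtype=int)
label_map[4:12, 4:12, 4:12] = 1   # a larger neighboring structure
label_map[7:9, 4:12, 4:12] = 2    # thin sheet
img = synthesize_image(label_map, seed=0)
print(img.shape)  # (16, 16, 16)
```

In the real framework the generative model is richer (spatial deformation, bias field, explicit downsampling/upsampling), but the principle is the same: the network only ever sees synthetic contrasts and resolutions, which is what makes it contrast- and resolution-agnostic at test time.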

📝 Abstract
The claustrum is a band-like gray matter structure located between putamen and insula whose exact functions are still actively researched. Its sheet-like structure makes it barely visible in in vivo Magnetic Resonance Imaging (MRI) scans at typical resolutions and neuroimaging tools for its study, including methods for automatic segmentation, are currently very limited. In this paper, we propose a contrast- and resolution-agnostic method for claustrum segmentation at ultra-high resolution (0.35 mm isotropic); the method is based on the SynthSeg segmentation framework (Billot et al., 2023), which leverages the use of synthetic training intensity images to achieve excellent generalization. In particular, SynthSeg requires only label maps to be trained, since corresponding intensity images are synthesized on the fly with random contrast and resolution. We trained a deep learning network for automatic claustrum segmentation, using claustrum manual labels obtained from 18 ultra-high resolution MRI scans (mostly ex vivo). We demonstrated the method to work on these 18 high resolution cases (Dice score = 0.632, mean surface distance = 0.458 mm, and volumetric similarity = 0.867 using 6-fold Cross Validation (CV)), and also on in vivo T1-weighted MRI scans at typical resolutions (~1 mm isotropic). We also demonstrated that the method is robust in a test-retest setting and when applied to multimodal imaging (T2-weighted, Proton Density and quantitative T1 scans). To the best of our knowledge this is the first accurate method for automatic ultra-high resolution claustrum segmentation, which is robust against changes in contrast and resolution. The method is released at https://github.com/chiara-mauri/claustrum_segmentation and as part of the neuroimaging package FreeSurfer (Fischl, 2012).
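The reported evaluation metrics are standard overlap and volume measures. As a reference for how they are typically computed on binary masks, here is a minimal sketch (hypothetical helper functions, not the released code; the paper's surface-distance metric is omitted for brevity):

```python
import numpy as np

def dice(a, b):
    """Dice score: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volumetric_similarity(a, b):
    """VS = 1 - |V_a - V_b| / (V_a + V_b): agreement in total volume,
    regardless of spatial overlap."""
    va, vb = int(a.astype(bool).sum()), int(b.astype(bool).sum())
    return 1.0 - abs(va - vb) / (va + vb)

# Usage: two partially overlapping 4x4 squares (16 voxels each,
# intersection 3x3 = 9 voxels)
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(dice(a, b))                   # 2*9/(16+16) = 0.5625
print(volumetric_similarity(a, b))  # 1.0 (equal volumes)
```

Note that VS can be high even when overlap is modest, which is why the paper reports Dice, a surface distance, and VS together.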
Problem

Research questions and friction points this paper is trying to address.

Develops contrast-agnostic method for claustrum segmentation in ultra-high resolution MRI
Addresses limited tools for automatic claustrum segmentation in neuroimaging
Ensures robustness across varying MRI contrasts and resolutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrast-agnostic ultra-high resolution segmentation
SynthSeg framework with synthetic training images
Deep learning trained on manual ex vivo labels
Chiara Mauri
Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
Ryan Fritz
Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
Jocelyn Mora
Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
Benjamin Billot
Researcher, Inria (medical image analysis, image segmentation, deep learning)
J. E. Iglesias
Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA; UCL Centre for Medical Image Computing, London, United Kingdom
K. V. Leemput
Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
J. Augustinack
Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
Douglas N. Greve
Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA