Validation of a CT-brain analysis tool for measuring global cortical atrophy in older patient cohorts

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the time-consuming reliance on manual visual rating for quantifying cerebral atrophy on clinical CT, this study developed and validated a fully automated deep learning tool that measures the Global Cerebral Atrophy (GCA) score across the whole brain in older adults. Trained and tested on non-contrast head CT scans from 864 real-world older patients—including acute medicine, acute stroke, and legacy cohorts—the tool runs without user input. Agreement with expert visual ratings was good (mean absolute error = 3.2; weighted kappa = 0.45 against rater-1 and 0.41 against rater-2, exceeding the inter-rater kappa of 0.28), with no significant difference in mean GCA scores between the tool and either rater; predicted atrophy scores also correlated significantly with age and cognitive function (both p < 0.001). The tool performed robustly on routine scans, enabling standardised quantitative atrophy measures at scale and serving as proof-of-concept toward a point-of-care, CT-based tool for early screening in neurodegenerative disease.

📝 Abstract
Quantification of brain atrophy currently requires visual rating scales, which are time-consuming, and automated brain image analysis is warranted. We validated our automated deep learning (DL) tool measuring the Global Cerebral Atrophy (GCA) score against trained human raters, and associations with age and cognitive impairment, in representative older (>65 years) patients. CT-brain scans were obtained from patients in acute medicine (ORCHARD-EPR), acute stroke (OCS studies) and a legacy sample. Scans were divided into a 60/20/20 ratio for training, optimisation and testing. CT-images were assessed by two trained raters (rater-1=864 scans, rater-2=20 scans). Agreement between DL tool-predicted GCA scores (range 0-39) and the visual ratings was evaluated using mean absolute error (MAE) and Cohen's weighted kappa. Among 864 scans (ORCHARD-EPR=578, OCS=200, legacy scans=86), MAE between the DL tool and rater-1 GCA scores was 3.2 overall, 3.1 for ORCHARD-EPR, 3.3 for OCS and 2.6 for the legacy scans, and half had DL-predicted GCA error between -2 and 2. Inter-rater agreement was kappa=0.45 between the DL-tool and rater-1, and 0.41 between the tool and rater-2, whereas it was lower at 0.28 between rater-1 and rater-2. There was no difference in GCA scores from the DL-tool and the two raters (one-way ANOVA, p=0.35) or in mean GCA scores between the DL-tool and rater-1 (paired t-test, t=-0.43, p=0.66), the tool and rater-2 (t=1.35, p=0.18) or between rater-1 and rater-2 (t=0.99, p=0.32). DL-tool GCA scores correlated with age and cognitive scores (both p<0.001). Our DL CT-brain analysis tool measured GCA score accurately and without user input in real-world scans acquired from older patients. Our tool will enable extraction of standardised quantitative measures of atrophy at scale for use in health data research and will act as proof-of-concept towards a point-of-care clinically approved tool.
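The abstract's headline agreement metrics (MAE and Cohen's weighted kappa between DL-predicted and visually rated GCA scores, range 0-39) can be computed from paired score lists. A minimal sketch, assuming quadratic disagreement weights (the paper does not state the weighting scheme) and hypothetical function names:

```python
import numpy as np

def mean_absolute_error(pred, rated):
    """Mean absolute difference between predicted and rated GCA scores."""
    pred, rated = np.asarray(pred, float), np.asarray(rated, float)
    return float(np.mean(np.abs(pred - rated)))

def weighted_kappa(pred, rated, n_classes=40, weights="quadratic"):
    """Cohen's weighted kappa for ordinal scores in 0..n_classes-1."""
    pred, rated = np.asarray(pred, int), np.asarray(rated, int)
    # observed joint distribution of (predicted, rated) score pairs
    obs = np.zeros((n_classes, n_classes))
    for i, j in zip(pred, rated):
        obs[i, j] += 1
    obs /= obs.sum()
    # expected joint distribution under independence (from the marginals)
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    idx = np.arange(n_classes)
    dist = np.abs(idx[:, None] - idx[None, :])
    w = dist**2 if weights == "quadratic" else dist  # disagreement weights
    return float(1.0 - (w * obs).sum() / (w * exp).sum())
```

Linear weights can be swapped in via `weights="linear"`; the two conventions can give noticeably different kappa values on a 40-point ordinal scale, which is why the choice matters when comparing against the reported 0.45.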
Problem

Research questions and friction points this paper is trying to address.

Validating an automated deep learning tool for brain atrophy measurement
Replacing time-consuming visual rating scales with automated analysis
Assessing the tool's accuracy against human raters and its clinical correlations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning tool automates brain atrophy measurement
Validated against trained human raters on real-world CT-brain scans
Provides standardised quantitative atrophy measures without user input
Sukhdeep Bal
Wolfson Centre for Prevention of Stroke and Dementia, Wolfson Building, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.
Emma Colbourne
Wolfson Centre for Prevention of Stroke and Dementia, Wolfson Building, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.
Jasmine Gan
Wolfson Centre for Prevention of Stroke and Dementia, Wolfson Building, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.
Ludovica Griffanti
Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, FMRIB Building, John Radcliffe Hospital, Headley Way, Headington, Oxford, OX3 9DU, UK.
Taylor Hanayik
Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, FMRIB Building, John Radcliffe Hospital, Headley Way, Headington, Oxford, OX3 9DU, UK.
Nele Demeyere
Nuffield Department of Clinical Neurosciences, University of Oxford, Level 6, West Wing, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.
Jim Davies
University of Oxford
computer science · medicine · governance · software
Sarah T Pendlebury
Wolfson Centre for Prevention of Stroke and Dementia, Wolfson Building, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.; Departments of Acute General (Internal) Medicine and Geratology, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.; NIHR Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Headley Way, Oxford, Oxfords
Mark Jenkinson
Professor of Neuroimaging
medical image analysis · neuroimaging · deep learning