Data-Driven Abdominal Phenotypes of Type 2 Diabetes in Lean, Overweight, and Obese Cohorts

📅 2025-08-14
🤖 AI Summary
High BMI is neither sufficient nor necessary for type 2 diabetes (T2D), as lean T2D and obese non-T2D individuals coexist—suggesting that abdominal body composition heterogeneity may underlie divergent disease phenotypes. Method: Leveraging large-scale clinical abdominal CT scans, we developed an AI-driven, interpretable analytical framework integrating 3D organ segmentation, random forest classification, SHAP-based feature attribution, and unsupervised clustering to quantify abdominal structural features—including pancreatic volume, intramuscular adipose tissue infiltration, and visceral/subcutaneous fat distribution. Contribution/Results: We identified weight-invariant, T2D-associated abdominal phenotypes across lean, overweight, and obese cohorts, revealing pancreatic atrophy and intramuscular lipid deposition as shared pathophysiological mechanisms. The model achieved AUCs of 0.72–0.74; among the top 20 predictive features, 14–18 were statistically significant, confirming spatial fat distribution and pancreatic morphology as core, interpretable determinants of T2D risk.

📝 Abstract
Purpose: Although elevated BMI is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in some adults with obesity suggests that detailed body composition may uncover abdominal phenotypes of type 2 diabetes. With AI, we can now extract detailed measurements of size, shape, and fat content from abdominal structures in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data. Approach: To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort (n = 1,728) and once each on the lean (n = 497), overweight (n = 611), and obese (n = 620) subgroups. Briefly, our experimental design transforms abdominal scans into collections of explainable measurements through segmentation, classifies type 2 diabetes with a cross-validated random forest, measures how each feature contributes to model-estimated risk or protection through SHAP analysis, groups scans by shared model decision patterns (clustering on SHAP values), and links clusters back to anatomical differences (classification). Results: The random forests achieved mean AUCs of 0.72–0.74. Shared type 2 diabetes signatures appeared in each group: fatty skeletal muscle, older age, greater visceral and subcutaneous fat, and a smaller or fat-laden pancreas. Univariate logistic regression confirmed the direction of 14–18 of the top 20 predictors within each subgroup (p < 0.05). Conclusions: Our findings suggest that abdominal drivers of type 2 diabetes may be consistent across weight classes.
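The classification step described above (tabular body-composition features into a cross-validated random forest, scored by AUC) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature matrix is a random stand-in for per-scan abdominal measurements such as pancreatic volume or visceral fat area.

```python
# Sketch of the paper's classification step, assuming tabular
# body-composition features; data are synthetic stand-ins, not
# measurements from the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-scan abdominal measurements
# (e.g. pancreatic volume, fat areas, muscle attenuation).
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=8, random_state=0)

# Cross-validated random forest, evaluated by ROC AUC as in the paper.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(round(aucs.mean(), 3))
```

On real cohort data the paper reports mean AUCs of 0.72–0.74; on this synthetic stand-in the number is not meaningful beyond demonstrating the workflow.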
Problem

Research questions and friction points this paper is trying to address.

Identify abdominal phenotypes linked to type 2 diabetes across BMI groups
Use AI to analyze 3D imaging for body composition signatures
Discover consistent abdominal drivers of diabetes in lean and obese individuals
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI extracts 3D body composition from CT scans
Random forest classifies diabetes with SHAP analysis
Clustering reveals shared diabetes signatures across BMI
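The clustering idea in the bullets above, grouping scans by shared model decision patterns, can be sketched as below. The paper clusters per-sample SHAP values; as a lightweight stand-in (to stay within scikit-learn), per-sample attributions are approximated here by weighting each centered feature with the forest's global importance, then clustered with k-means. All data and parameters are illustrative assumptions.

```python
# Sketch of clustering scans by model decision patterns. The paper
# uses SHAP values; here a crude approximation (centered features
# weighted by global importances) stands in for them.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=6, random_state=1)
forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Approximate attribution matrix: one row per scan, one column per
# feature (a stand-in for the per-sample SHAP matrix).
attrib = (X - X.mean(axis=0)) * forest.feature_importances_

# Group scans with similar attribution patterns.
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(attrib)
print(np.bincount(clusters))
```

Each resulting cluster can then be examined for anatomical differences, mirroring the paper's link back from decision patterns to body composition.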
Lucas W. Remedios
Vanderbilt University
medical imaging, deep learning, computer vision
Chloe Choe
Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, USA
Trent M. Schwartz
Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, USA
Dingjie Su
Vanderbilt University, Department of Computer Science, Nashville, USA
Gaurav Rudravaram
Research Assistant
Deep Learning, AI, Histology, Connectomics, Diffusion
Chenyu Gao
Electrical and Computer Engineering, Vanderbilt University
Medical Image Analysis, Computer Vision
Aravind R. Krishnan
Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, USA
Adam M. Saunders
PhD Student, Vanderbilt University
medical imaging, deep learning, magnetic resonance imaging
Michael E. Kim
Vanderbilt University, Department of Computer Science, Nashville, USA
Shunxing Bao
Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, USA
Alvin C. Powers
Vanderbilt University
Bennett A. Landman
Vanderbilt University, Department of Computer Science, Nashville, USA; Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, USA; Vanderbilt University, Department of Biomedical Engineering, Nashville, USA
John Virostko
University of Texas at Austin, Department of Diagnostic Medicine, Dell Medical School, Austin, USA; University of Texas at Austin, Livestrong Cancer Institutes, Dell Medical School, Austin, USA; University of Texas at Austin, Department of Oncology, Dell Medical School, Austin, USA; University of Texas at Austin, Oden Institute for Computational Engineering and Sciences, Austin, USA