AI analysis of medical images at scale as a health disparities probe: a feasibility demonstration using chest radiographs

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional epidemiological data often lack granularity and scalability for quantifying health disparities, particularly in underserved populations. Method: We propose imaging-derived Health Disparity Indices (iHDIs), which transform chest X-rays into objective, quantitative proxies for health equity assessment. First, pretrained CNNs (e.g., DenseNet, ResNet) regress pulmonary parenchymal lesion severity to derive imaging phenotypes. Unsupervised clustering (K-means/DBSCAN) groups these phenotypes, which are then integrated with social determinants of health (SDOH)—including sex and race—using four complementary disparity metrics: variance, disparity index, Theil index, and mean log deviation. Contribution/Results: Evaluated on 1,571 clinical cases, iHDIs yield numerically sound, interpretable values and robustly detect systematic, sex- and race-associated differences in imaging phenotypes. This demonstrates that routine medical imaging can serve as a scalable, non-invasive, and high-throughput data source for population-level health disparity research.

📝 Abstract
Health disparities (differences in non-genetic conditions that influence health) can be associated with differences in burden of disease by groups within a population. Social determinants of health (SDOH) are domains such as health care access, dietary access, and economics frequently studied for potential association with health disparities. Evaluating SDOH-related phenotypes using routine medical images as data sources may enhance health disparities research. We developed a pipeline for using quantitative measures automatically extracted from medical images as inputs into health disparities index calculations. Our study focused on the use case of two SDOH demographic correlates (sex and race) and data extracted from chest radiographs of 1,571 unique patients. The likelihood of severe disease within the lung parenchyma from each image type, measured using an established deep learning model, was merged into a single numerical image-based phenotype for each patient. Patients were then separated into phenogroups by unsupervised clustering of the image-based phenotypes. The health rate for each phenogroup was defined as the median image-based phenotype for each SDOH, and these health rates were used as inputs to four imaging-derived health disparities indices (iHDIs): one absolute measure (between-group variance) and three relative measures (index of disparity, Theil index, and mean log deviation). The iHDI measures demonstrated feasible values for each SDOH demographic correlate, showing potential for medical images to serve as a novel probe for health disparities. Large-scale AI analysis of medical images can serve as a probe for a novel data source for health disparities research.
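The four iHDI measures named in the abstract are standard disparity formulas from the health equity literature. A minimal sketch of how they might be computed from phenogroup health rates is shown below; the function name, the use of the weighted mean as the reference rate for the index of disparity, and the optional group weights are illustrative assumptions, not details from the paper:

```python
import numpy as np

def ihdi_measures(rates, weights=None):
    """Compute four disparity indices over group 'health rates'
    (here, e.g., the median image-based phenotype per demographic group).

    rates   : positive rates, one per group
    weights : optional population shares per group (default: equal)
    """
    r = np.asarray(rates, dtype=float)
    w = (np.ones_like(r) / r.size if weights is None
         else np.asarray(weights, dtype=float))
    w = w / w.sum()
    mu = np.sum(w * r)  # weighted mean rate, used as the reference

    # Absolute measure: between-group variance
    bgv = np.sum(w * (r - mu) ** 2)

    # Relative measures (index of disparity here uses the mean as
    # reference; some definitions use the best-off group instead)
    idisp = np.sum(np.abs(r - mu)) / (r.size * mu)
    theil = np.sum(w * (r / mu) * np.log(r / mu))  # Theil T index
    mld = -np.sum(w * np.log(r / mu))              # mean log deviation

    return {"BGV": bgv, "ID": idisp, "Theil": theil, "MLD": mld}
```

All four measures are zero when every group has the same rate and grow as the rates diverge; BGV carries the (squared) units of the rate itself, while the three relative measures are unitless.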
Problem

Research questions and friction points this paper is trying to address.

Using AI to analyze medical images for health disparities research
Developing a pipeline for image-based health disparity index calculations
Demonstrating feasibility of chest radiographs as a novel data source
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI extracts quantitative measures from medical images
Deep learning models assess severe disease likelihood
Unsupervised clustering groups patients by image phenotypes
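The phenogroup step listed above can be sketched with scikit-learn on synthetic stand-in data; the choice of K-means, k = 2, and the generated phenotype values are illustrative assumptions and do not reproduce the paper's actual model or parameters:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for per-patient image-based phenotypes (e.g., severity scores):
# two synthetic modes around 0.2 and 0.6, flattened to one value per patient
phenotypes = rng.normal(loc=[0.2, 0.6], scale=0.05, size=(200, 2))
phenotypes = phenotypes.ravel().reshape(-1, 1)

# Unsupervised clustering into phenogroups (k chosen for illustration)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(phenotypes)
labels = kmeans.labels_

# Health rate per phenogroup = median image-based phenotype in the group
health_rates = [float(np.median(phenotypes[labels == k])) for k in range(2)]
```

In the paper's pipeline these per-phenogroup health rates, stratified by SDOH demographic correlate, become the inputs to the iHDI calculations.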
Heather M. Whitney
The University of Chicago, Department of Radiology, Chicago, IL, USA
Hui Li
The University of Chicago, Department of Radiology, Chicago, IL, USA
Karen Drukker
Research Associate Professor, University of Chicago
medical imaging · radiomics · deep learning
Elbert Huang
The University of Chicago, Department of Medicine, Chicago, IL, USA
Maryellen L. Giger
The University of Chicago, Department of Radiology, Chicago, IL, USA