Algorithms Trained on Normal Chest X-rays Can Predict Health Insurance Types

📅 2025-11-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study challenges the assumption that medical imaging data reflect purely biological phenomena by demonstrating that deep learning models can infer a patient's insurance type, a proxy for socioeconomic status, from routine, normal chest X-rays. Method: The authors trained and evaluated state-of-the-art architectures (DenseNet121, SwinV2-B, and MedMamba) on the MIMIC-CXR-JPG and CheXpert datasets, controlling rigorously for age, sex, race, and other clinical covariates to isolate the socioeconomic signal. Patch occlusion analysis localized predictive features across the lung fields and mediastinum. Results: Models achieved AUCs of 0.67 and 0.68 on the two datasets, respectively, with statistically significant performance even after covariate adjustment. The distributed, anatomy-agnostic signal, termed a "social fingerprint," reflects systemic sociotechnical disparities embedded in imaging equipment, acquisition protocols, and clinical workflows. This work provides the first empirical evidence that structural social bias is encoded diffusely in routinely acquired negative radiographs, establishing a new paradigm and methodological foundation for auditing implicit inequities in medical AI systems.
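The patch occlusion analysis mentioned above can be sketched in a few lines: slide a square patch over the image, mask the covered region, and record how much the classifier's score drops at each location. This is a minimal illustrative version, not the paper's implementation; `toy_predict` is a hypothetical stand-in for a trained model.

```python
import numpy as np

def occlusion_map(image, predict, patch=16, fill=0.0):
    """Patch-occlusion attribution: mask each patch-sized region in
    turn and record the drop in the model's score. Large drops mark
    regions the model relies on; a diffuse map (as reported in the
    paper) means no single anatomical region carries the signal."""
    h, w = image.shape
    base = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - predict(occluded)
    return heat

# Toy stand-in for a trained classifier: the score is the mean
# intensity of the upper half of the image (purely illustrative).
def toy_predict(img):
    return img[: img.shape[0] // 2].mean()

img = np.ones((64, 64))
heat = occlusion_map(img, toy_predict, patch=16)
# Occluding upper-half patches lowers the toy score; lower-half patches leave it unchanged.
```

With a real model, `predict` would be the network's positive-class probability for the insurance label, and the heatmap would be averaged over many normal radiographs.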

📝 Abstract
Artificial intelligence is revealing what medicine never intended to encode. Deep vision models, trained on chest X-rays, can now detect not only disease but also invisible traces of social inequality. In this study, we show that state-of-the-art architectures (DenseNet121, SwinV2-B, MedMamba) can predict a patient's health insurance type, a strong proxy for socioeconomic status, from normal chest X-rays with significant accuracy (AUC around 0.67 on MIMIC-CXR-JPG, 0.68 on CheXpert). The signal persists even when age, race, and sex are controlled for, and remains detectable when the model is trained exclusively on a single racial group. Patch-based occlusion reveals that the signal is diffuse rather than localized, embedded across the upper and mid-thoracic regions. This suggests that deep networks may be internalizing subtle traces of clinical environments, equipment differences, or care pathways, in effect learning socioeconomic segregation itself. These findings challenge the assumption that medical images are neutral biological data. By uncovering how models perceive and exploit these hidden social signatures, this work reframes fairness in medical AI: the goal is no longer only to balance datasets or adjust thresholds, but to interrogate and disentangle the social fingerprints embedded in clinical data itself.
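The AUC figures cited above (0.67 and 0.68) measure how well the model's scores rank insured groups apart. As a reference point, AUC can be computed directly from its rank-based (Mann-Whitney) definition: the probability that a randomly chosen positive example out-scores a randomly chosen negative one. A minimal sketch, not tied to the paper's evaluation code:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney statistic): fraction of
    positive/negative pairs where the positive example scores
    higher, counting ties as half a win. 0.5 = chance; 1.0 = perfect."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Perfect separation gives 1.0; an AUC near 0.67, as reported,
# means the ranking is well above chance but far from deterministic.
```

For pairwise comparisons across architectures, a significance test on the AUC difference (e.g. DeLong's test) would typically accompany this statistic.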
Problem

Research questions and friction points this paper is trying to address.

AI models predict health insurance types from normal chest X-rays
Deep networks detect socioeconomic status embedded in medical images
Medical AI fairness requires disentangling social fingerprints in data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rigorous covariate control (age, sex, race) isolates the socioeconomic signal
Patch-based occlusion localizes a diffuse, anatomy-agnostic "social fingerprint"
Signal persists even when models are trained on a single racial group
Chi-Yu Chen
National Taiwan University Hospital, Taiwan
Rawan Abulibdeh
University of Toronto, Canada
Arash Asgari
York University, Canada
Leo Anthony Celi
Massachusetts Institute of Technology
Deirdre Goode
Mass General Brigham, USA
Hassan Hamidi
York University, Canada
Laleh Seyyed-Kalantari
Assistant Professor, York University, Vector Institute
Responsible AI, Generative AI, Foundation Models, AI in medical imaging, AI fairness
Po-Chih Kuo
National Tsing Hua University
Machine learning, Medical image analysis, Biomedical signal processing
Ned McCague
MIT, USA
Thomas Sounack
Dana-Farber Cancer Institute, USA