Foundation Artificial Intelligence Models for Health Recognition Using Face Photographs (FAHR-Face)

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates facial images as non-invasive health biomarkers for two clinical tasks: biological age estimation and survival risk prediction in cancer patients. We propose the first unified health foundation model pretrained on 40 million facial images, enabling dual-task adaptation via a single architecture—without requiring additional annotations. Our method employs a two-stage age-balanced fine-tuning strategy coupled with Cox proportional hazards survival analysis to decouple tasks effectively. We further introduce a novel anatomical attribution analysis, revealing task-specific facial region contributions to aging and disease risk, and provide visual interpretability via saliency maps. The model achieves a mean absolute error of 5.1 years in biological age estimation and yields a hazard ratio of 3.22 (P < 0.001) between the highest- and lowest-risk quartiles in survival prediction. Extensive validation across diverse age groups, sexes, racial backgrounds, and cancer types confirms strong generalizability and robustness.

📝 Abstract
Background: Facial appearance offers a noninvasive window into health. We built FAHR-Face, a foundation model trained on >40 million facial images, and fine-tuned it for two distinct tasks: biological age estimation (FAHR-FaceAge) and survival risk prediction (FAHR-FaceSurvival).

Methods: FAHR-FaceAge underwent a two-stage, age-balanced fine-tuning on 749,935 public images; FAHR-FaceSurvival was fine-tuned on 34,389 photos of cancer patients. Model robustness (cosmetic surgery, makeup, pose, lighting) and independence (saliency mapping) were tested extensively. Both models were clinically tested in two independent cancer patient datasets, with survival analyzed by multivariable Cox models adjusted for clinical prognostic factors.

Findings: For age estimation, FAHR-FaceAge had the lowest mean absolute error of 5.1 years on public datasets, outperforming benchmark models and maintaining accuracy across the full human lifespan. In cancer patients, FAHR-FaceAge outperformed a prior facial age estimation model in survival prognostication. FAHR-FaceSurvival demonstrated robust prediction of mortality, and the highest-risk quartile had more than triple the mortality of the lowest (adjusted hazard ratio 3.22; P<0.001). These findings were validated in the independent cohort, and both models showed generalizability across age, sex, race and cancer subgroups. The two algorithms provided distinct, complementary prognostic information; saliency mapping revealed each model relied on distinct facial regions. The combination of FAHR-FaceAge and FAHR-FaceSurvival improved prognostic accuracy.

Interpretation: A single foundation model can generate inexpensive, scalable facial biomarkers that capture both biological ageing and disease-related mortality risk. The foundation model enabled effective training using relatively small clinical datasets.
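The abstract reports a hazard ratio of 3.22 between the highest- and lowest-risk quartiles, estimated with multivariable Cox models. As a rough illustration of quartile-based risk stratification only, the sketch below computes an unadjusted events-per-person-time hazard ratio on synthetic data; all names and numbers here are illustrative assumptions, not the paper's method or data.

```python
# Illustrative sketch: split patients into quartiles by a model risk score and
# compare crude hazards. The paper uses multivariable Cox models; this is only
# an UNADJUSTED estimate on synthetic data.
import random

random.seed(0)

# Synthetic cohort: (risk score, observed follow-up time, event indicator).
# Higher score -> higher simulated hazard, mimicking a survival model's output.
patients = []
for _ in range(2000):
    score = random.random()
    time = random.expovariate(0.1 + 0.4 * score)  # hazard grows with score
    observed = min(time, 10.0)                    # administrative censoring at 10 years
    patients.append((score, observed, 1 if time <= 10.0 else 0))

# Quartiles by risk score.
patients.sort(key=lambda p: p[0])
q = len(patients) // 4
lowest, highest = patients[:q], patients[-q:]

def event_rate(group):
    """Crude events-per-person-time estimate of the hazard."""
    events = sum(e for _, _, e in group)
    person_time = sum(t for _, t, _ in group)
    return events / person_time

hr = event_rate(highest) / event_rate(lowest)
print(f"Unadjusted hazard ratio (Q4 vs Q1): {hr:.2f}")
```

The events-per-person-time ratio is the maximum-likelihood hazard estimate under an exponential model with censoring; a real analysis, as in the paper, would adjust for clinical prognostic factors via Cox regression.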
Problem

Research questions and friction points this paper is trying to address.

Estimating biological age from facial images accurately
Predicting survival risk in cancer patients using photos
Developing robust AI models for health recognition via faces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Foundation model trained on 40M facial images
Two-stage fine-tuning for age and survival prediction
Robust testing for cosmetic and clinical variations
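The "two-stage fine-tuning for age" above relies on age-balanced training data so that over-represented age ranges do not dominate the loss. The paper's exact balancing scheme is not given on this page; the sketch below shows one standard approach, inverse-frequency sampling weights per age bin (the function name and the 10-year bin width are assumptions for illustration).

```python
# Hedged sketch of age-balanced sampling: weight each example inversely to the
# size of its age bin, so every bin contributes equal total weight in training.
from collections import Counter

def age_balanced_weights(ages, bin_width=10):
    """Return one sampling weight per example, inversely proportional
    to the number of examples in that example's age bin."""
    bins = [a // bin_width for a in ages]
    counts = Counter(bins)
    return [1.0 / counts[b] for b in bins]

# Toy example: young ages over-represented, as in many public face datasets.
ages = [25, 27, 29, 31, 33, 35, 72, 88]
weights = age_balanced_weights(ages)
print(weights)  # each decade bin carries equal total weight
```

Such weights could be fed to a weighted sampler during fine-tuning, so rare elderly faces are seen as often as common young ones.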
Fridolin Haugg
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
Grace Lee
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
John He
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Leonard Nurnberg
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands; Department of Radiation Oncology (MAASTRO), Maastricht University, Maastricht, The Netherlands
D. Bontempi
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
D. Bitterman
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
Paul J. Catalano
Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
V. Prudente
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
Dmitrii Glubokov
Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Andrew Warrington
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Suraj Pai
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
D. Ruysscher
Department of Radiation Oncology (MAASTRO), Maastricht University Medical Center, GROW School, Maastricht, The Netherlands
C. Guthier
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
B. Kann
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
V. Gladyshev
Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Hugo J W L Aerts
Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology (MAASTRO), Maastricht University, Maastricht, The Netherlands; Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Raymond H. Mak
Associate Professor of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute
Radiation Oncology; Artificial Intelligence