$\texttt{LucidAtlas}$: Learning Uncertainty-Aware, Covariate-Disentangled, Individualized Atlas Representations

📅 2025-02-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the modeling challenge posed by intertwined spatial heterogeneity, covariate confounding, and population-level uncertainty in high-dimensional medical data. It proposes the first spatial atlas learning framework jointly driven by covariate disentanglement and uncertainty awareness. Methodologically, the framework integrates neural additive models, Bayesian uncertainty quantification, and prior-guided atlas optimization to explicitly disentangle covariate effects; in addition, a marginalization-based interpretability mechanism quantitatively attributes covariate-specific modulation of the atlas. Extensive evaluation on two real-world medical datasets demonstrates the framework's capabilities in personalized prediction, interpretable covariate attribution, dynamic population trend modeling, and calibrated uncertainty quantification. The implementation code will be made publicly available.

πŸ“ Abstract
The goal of this work is to develop principled techniques to extract information from high-dimensional datasets with complex dependencies, in areas such as medicine, that can provide insight into individual- as well as population-level variation. We develop $\texttt{LucidAtlas}$, an approach that can represent spatially varying information and capture the influence of covariates as well as population uncertainty. As a versatile atlas representation, $\texttt{LucidAtlas}$ offers robust capabilities for covariate interpretation, individualized prediction, population trend analysis, and uncertainty estimation, with the flexibility to incorporate prior knowledge. Additionally, we discuss the trustworthiness and potential risks of neural additive models for analyzing dependent covariates, and then introduce a marginalization approach to explain the dependence of the model's response (the atlas) on an individual predictor. To validate our method, we demonstrate its generalizability on two medical datasets. Our findings underscore the critical role of by-construction interpretable models in advancing scientific discovery. Our code will be publicly available upon acceptance.
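The abstract pairs a neural additive model with a marginalization step to attribute each covariate's effect on the response. Below is a minimal, hypothetical sketch of that general idea, not the paper's implementation: polynomial shape functions stand in for per-covariate subnetworks, backfitting stands in for joint training, and the toy `age`/`size` covariates (deliberately dependent, echoing the dependent-covariate discussion) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two *dependent* covariates (e.g. age and a size measure that
# grows with age), so the attribution question is non-trivial.
n = 2000
age = rng.uniform(0.0, 10.0, n)
size = 0.8 * age + rng.normal(0.0, 0.5, n)            # correlated with age
y = np.sin(age) + 0.3 * size + rng.normal(0.0, 0.1, n)

# Additive model: y ≈ f_age(age) + f_size(size). Each shape function is a
# 1-D polynomial fit, standing in for a per-covariate subnetwork of a
# neural additive model.
def fit_shape(x, residual, deg=7):
    return np.polynomial.Polynomial.fit(x, residual, deg)

# Backfitting: alternately refit each shape function on the residual left
# by the other one.
f_age = fit_shape(age, y)
f_size = fit_shape(size, y - f_age(age))
for _ in range(20):
    f_age = fit_shape(age, y - f_size(size))
    f_size = fit_shape(size, y - f_age(age))

pred = f_age(age) + f_size(size)

# Marginalized effect of age: average the fitted model over the empirical
# distribution of the other covariate (partial-dependence style). This is
# one way a single covariate's contribution can be read off an additive
# model even when covariates are dependent.
def marginal_effect_age(a):
    return f_age(a) + f_size(size).mean()
```

Because each covariate enters through its own 1-D function, the fitted shapes are directly plottable, which is the by-construction interpretability the abstract emphasizes; the marginalization step then reduces the full model to a single covariate's curve.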
Problem

Research questions and friction points this paper is trying to address.

Extract insights from high-dimensional medical datasets
Represent spatially varying data with covariate influence
Ensure interpretable models for scientific discovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-aware data representation
Covariate-disentangled atlas modeling
Individualized prediction and analysis
Authors

Yining Jiao (UNC-Chapel Hill): Geometry Modeling, Shape Analysis, AI4Science
S. Bhamidi (UNC-Chapel Hill)
Huaizhi Qu (UNC Chapel Hill, University of Science and Technology of China): LLM, Multimodal LLM, 3D Vision, AI for Science
C. Zdanski (UNC-Chapel Hill)
Julia Kimbell (UNC-Chapel Hill)
Andrew Prince (UNC-Chapel Hill)
Cameron Worden (UNC-Chapel Hill)
Samuel Kirse (UNC-Chapel Hill)
Christopher Rutter (UNC-Chapel Hill)
Benjamin H. Shields (UNC-Chapel Hill)
Jisan Mahmud (UNC-Chapel Hill)
Tianlong Chen (Assistant Professor, CS@UNC Chapel Hill; Chief AI Scientist, hireEZ): Machine Learning, AI4Science, Computer Vision, Sparsity
Marc Niethammer (Professor of Computer Science, UC San Diego): medical image analysis, machine learning, image registration