Dementia Through Different Eyes: Explainable Modeling of Human and LLM Perceptions for Early Awareness

📅 2025-05-19
🤖 AI Summary
This study investigates perceptual discrepancies between laypersons (e.g., family caregivers) and large language models (LLMs) in detecting early dementia from linguistic descriptions, and their respective alignment with clinical diagnoses. Method: Working with transcribed picture-description texts, the authors propose an interpretable cross-subject perception modeling framework: LLMs extract high-level, expert-guided linguistic features, and logistic regression models human and LLM perceptions for comparison against clinical diagnoses. Contribution/Results: LLM perception aligns more closely with clinical patterns and draws on richer linguistic features, whereas human judgments rely on a narrow and sometimes misleading set of cues and are susceptible to contextual bias; both show high false-negative rates, frequently overlooking dementia cases. To the authors' knowledge, this is the first systematic comparative analysis of human and LLM linguistic perception in dementia detection. The framework improves accuracy and interpretability in identifying key linguistic markers, including semantic paucity and syntactic simplification, suggesting a low-cost, deployable paradigm for early dementia screening.

📝 Abstract
Cognitive decline often surfaces in language years before diagnosis. It is frequently non-experts, such as those closest to the patient, who first sense a change and raise concern. As LLMs become integrated into daily communication and used over prolonged periods, it may even be an LLM that notices something is off. But what exactly do they notice (and should be noticing) when making that judgment? This paper investigates how dementia is perceived through language by non-experts. We presented transcribed picture descriptions to non-expert humans and LLMs, asking them to intuitively judge whether each text was produced by someone healthy or with dementia. We introduce an explainable method that uses LLMs to extract high-level, expert-guided features representing these picture descriptions, and use logistic regression to model human and LLM perceptions and compare both with clinical diagnoses. Our analysis reveals that human perception of dementia is inconsistent and relies on a narrow, and sometimes misleading, set of cues. LLMs, by contrast, draw on a richer, more nuanced feature set that aligns more closely with clinical patterns. Still, both groups show a tendency toward false negatives, frequently overlooking dementia cases. Through our interpretable framework and the insights it provides, we hope to help non-experts better recognize the linguistic signs that matter.
Problem

Research questions and friction points this paper is trying to address.

Investigates non-expert human and LLM perception of dementia through language
Compares intuitive judgments of dementia with clinical diagnoses using explainable methods
Aims to improve non-expert recognition of linguistic signs of dementia
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs extract expert-guided linguistic features
Logistic regression models human and LLM perceptions
Explainable framework compares perceptions with clinical diagnoses
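The pipeline above (LLM-scored, expert-guided features fed into a logistic regression whose coefficients expose which cues drive a perceived judgment) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names, scores, and labels are hypothetical stand-ins, and a plain gradient-descent fit replaces whatever solver the authors used.

```python
import math

# Hypothetical expert-guided features an LLM might score per transcript
# (illustrative names; not the paper's actual feature set).
FEATURES = ["semantic_paucity", "syntactic_simplification", "lexical_richness"]

# Toy data: per-transcript feature scores in [0, 1] and a perception label
# (1 = judged "dementia", 0 = judged "healthy"). Entirely fabricated.
X = [
    [0.9, 0.8, 0.2],
    [0.8, 0.7, 0.3],
    [0.7, 0.9, 0.1],
    [0.2, 0.1, 0.9],
    [0.1, 0.2, 0.8],
    [0.3, 0.3, 0.7],
]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logreg(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by plain batch gradient descent."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_feat, 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

w, b = fit_logreg(X, y)

# Interpretability step: rank coefficients by magnitude to see which
# features most strongly push the modeled perception toward "dementia".
for name, coef in sorted(zip(FEATURES, w), key=lambda t: -abs(t[1])):
    print(f"{name:26s} {coef:+.2f}")
```

On this separable toy data the fit assigns a positive weight to the dementia-associated cues and a negative weight to lexical richness; inspecting these signed weights is what makes the perception model explainable.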