Beyond Plain Demos: A Demo-centric Anchoring Paradigm for In-Context Learning in Alzheimer's Disease Detection

๐Ÿ“… 2025-11-10
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Detecting Alzheimer's disease (AD) from narrative transcripts faces two key challenges: (1) task-cognition gaps arising from out-of-distribution pretraining, and (2) highly homogeneous transcript contexts, which severely limit contextual perception in in-context learning (ICL). To address these, the authors propose DA4ICL, a demo-centric anchoring framework introducing Diverse and Contrastive Retrieval (DCR) and Projected Vector Anchoring (PVA). DCR enhances semantic and discriminative diversity within the demonstration set, while PVA injects anchor vectors into every Transformer layer's hidden states to explicitly strengthen fine-grained contextual perception. Evaluated on three low-resource, out-of-distribution AD benchmarks, DA4ICL significantly outperforms both conventional ICL and task-vector approaches. The results demonstrate its robustness for AD identification in highly homogeneous clinical narratives, charting a new paradigm for few-shot, high-homogeneity NLP tasks in healthcare.

๐Ÿ“ Abstract
Detecting Alzheimer's disease (AD) from narrative transcripts challenges large language models (LLMs): pre-training rarely covers this out-of-distribution task, and all transcript demos describe the same scene, producing highly homogeneous contexts. These factors cripple both the model's built-in task knowledge (task cognition) and its ability to surface subtle, class-discriminative cues (contextual perception). Because cognition is fixed after pre-training, improving in-context learning (ICL) for AD detection hinges on enriching perception through better demonstration (demo) sets. We demonstrate that standard ICL quickly saturates: its demos lack diversity (context width) and fail to convey fine-grained signals (context depth). Although recent task vector (TV) approaches improve broad task adaptation by injecting TVs into the LLMs' hidden states (HSs), they are ill-suited for AD detection due to mismatches in injection granularity, strength, and position. To address these bottlenecks, we introduce DA4ICL, a demo-centric anchoring framework that jointly expands context width via Diverse and Contrastive Retrieval (DCR) and deepens each demo's signal via Projected Vector Anchoring (PVA) at every Transformer layer. Across three AD benchmarks, DA4ICL achieves large, stable gains over both ICL and TV baselines, charting a new paradigm for fine-grained, OOD and low-resource LLM adaptation.
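The abstract describes PVA as injecting anchor vectors into the hidden states at every Transformer layer to deepen each demo's signal. The paper's actual anchors are learned and projected; as a rough illustration only, the sketch below derives a per-layer anchor from demo hidden states (a simple mean, an assumption not taken from the paper) and adds it to every token position with a per-layer strength. All function names and the scalar `alphas` parameterization are hypothetical.

```python
# Hypothetical sketch of layer-wise anchor injection (not the paper's
# exact PVA): anchors are approximated by the mean of demo hidden
# states per layer, then added to the query's hidden states at every
# token position, scaled by a per-layer strength alpha_l.
import numpy as np

def derive_anchors(demo_hidden):
    """demo_hidden: (L, T_demo, d) demo states -> (L, d) anchor vectors."""
    return demo_hidden.mean(axis=1)

def inject_anchors(hidden, anchors, alphas):
    """Add alpha_l * anchor_l at every token position of layer l.

    hidden:  (L, T, d) per-layer hidden states of the query
    anchors: (L, d)    one anchor vector per layer
    alphas:  (L,)      per-layer injection strengths
    """
    # Broadcast (L, 1, 1) * (L, 1, d) over the token dimension T.
    return hidden + alphas[:, None, None] * anchors[:, None, :]
```

Controlling strength per layer (`alphas`) mirrors the abstract's point that injection granularity, strength, and position must match the task: a single global vector added at one layer, as in plain task-vector methods, cannot express this.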
Problem

Research questions and friction points this paper is trying to address.

Detecting Alzheimer's disease from narrative transcripts challenges LLMs
Standard ICL lacks demo diversity and fails to convey fine-grained signals
Task vector approaches are ill-suited due to injection granularity mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diverse and Contrastive Retrieval expands context width
Projected Vector Anchoring deepens demo signals
Framework jointly enhances context width and depth
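The paper does not give DCR's exact algorithm here, but its stated goal, demos that are both semantically diverse and class-discriminative, can be sketched with a greedy, MMR-style selection over embedding similarities that alternates between AD and control labels. Everything below (function name, the `lam` trade-off weight, cosine similarity, label alternation) is an illustrative assumption, not the authors' method.

```python
# Hypothetical sketch in the spirit of Diverse and Contrastive
# Retrieval (DCR): greedily pick k demos that are (a) relevant to the
# query, (b) dissimilar to already-chosen demos (diversity / context
# width), and (c) class-balanced by alternating labels (contrast).
import numpy as np

def dcr_select(query_emb, demo_embs, labels, k=4, lam=0.5):
    """Return indices of k demos, alternating labels (e.g. AD=1, control=0)."""
    demo_embs = demo_embs / np.linalg.norm(demo_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    relevance = demo_embs @ q              # cosine similarity to the query
    chosen, want = [], 0                   # label class to pick next
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(labels)):
            if i in chosen or labels[i] != want:
                continue
            # MMR-style score: relevant to query, far from chosen demos.
            redundancy = max((demo_embs[i] @ demo_embs[j] for j in chosen),
                             default=0.0)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        if best is not None:
            chosen.append(best)
        want = 1 - want                    # alternate AD / control classes
    return chosen
```

The redundancy penalty is what distinguishes this from plain top-k retrieval: with highly homogeneous transcripts (everyone describes the same picture), nearest neighbors are near-duplicates, so penalizing similarity to already-chosen demos is one plausible way to widen the context.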
๐Ÿ”Ž Similar Papers
No similar papers found.
Puzhen Su
College of Computer Science and Technology, National University of Defense Technology, Changsha, China
Haoran Yin
Leiden University
Yongzhu Miao
College of Computer Science and Technology, National University of Defense Technology, Changsha, China
Jintao Tang
National University of Defense Technology
Shasha Li
College of Computer Science and Technology, National University of Defense Technology, Changsha, China
Ting Wang
College of Computer Science and Technology, National University of Defense Technology, Changsha, China