AI Summary
Alzheimer's disease (AD) narrative transcription faces two key challenges: (1) task-cognition gaps arising from out-of-distribution pretraining, and (2) highly homogeneous transcription contexts, which severely limit contextual awareness in in-context learning (ICL). To address these, we propose DA4ICL, a novel framework introducing Diverse and Contrastive Retrieval (DCR) and Projected Vector Anchoring (PVA). DCR enhances semantic and discriminative diversity within the demonstration set, while PVA injects learnable, layer-wise anchor vectors into the Transformer layers to explicitly strengthen fine-grained contextual awareness. Evaluated on three low-resource, out-of-distribution AD benchmarks, DA4ICL significantly outperforms conventional ICL and task-vector approaches. The results demonstrate its robustness for AD identification in highly homogeneous clinical narratives, establishing a new paradigm for few-shot, high-homogeneity NLP tasks in healthcare.
Abstract
Detecting Alzheimer's disease (AD) from narrative transcripts challenges large language models (LLMs): pre-training rarely covers this out-of-distribution task, and all transcript demos describe the same scene, producing highly homogeneous contexts. These factors cripple both the model's built-in task knowledge (task cognition) and its ability to surface subtle, class-discriminative cues (contextual perception). Because cognition is fixed after pre-training, improving in-context learning (ICL) for AD detection hinges on enriching perception through better demonstration (demo) sets. We demonstrate that standard ICL quickly saturates: its demos lack diversity (context width) and fail to convey fine-grained signals (context depth). And while recent task vector (TV) approaches improve broad task adaptation by injecting TVs into the LLMs' hidden states (HSs), they are ill-suited for AD detection due to mismatches in injection granularity, strength and position. To address these bottlenecks, we introduce DA4ICL, a demo-centric anchoring framework that jointly expands context width via Diverse and Contrastive Retrieval (DCR) and deepens each demo's signal via Projected Vector Anchoring (PVA) at every Transformer layer. Across three AD benchmarks, DA4ICL achieves large, stable gains over both ICL and TV baselines, charting a new paradigm for fine-grained, OOD and low-resource LLM adaptation.
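To make the two components concrete, here is a minimal, hypothetical sketch of the ideas the abstract describes. The DCR half is rendered as an MMR-style selection that trades off relevance against redundancy while alternating class labels to keep the demo set contrastive; the PVA half adds a projected demo representation to every token's hidden state at each layer. All function names, shapes, weighting constants (`lam`, `alpha`) and the random data are illustrative assumptions, not the paper's actual algorithm or hyperparameters.

```python
import numpy as np

def dcr_select(query, demos, labels, k, lam=0.5):
    """MMR-style sketch of Diverse and Contrastive Retrieval:
    score = lam * relevance - (1 - lam) * redundancy, alternating
    class labels so the selected demo set stays class-contrastive."""
    demos = demos / np.linalg.norm(demos, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    rel = demos @ q                       # cosine relevance to the query
    classes = sorted(set(labels))
    chosen, want = [], 0
    while len(chosen) < k:
        pool = [i for i in range(len(demos))
                if i not in chosen and labels[i] == classes[want]]
        if not pool:                      # class exhausted: fall back to any
            pool = [i for i in range(len(demos)) if i not in chosen]
        red = (np.max(demos[pool] @ demos[chosen].T, axis=1)
               if chosen else np.zeros(len(pool)))
        score = lam * rel[pool] - (1 - lam) * red
        chosen.append(pool[int(np.argmax(score))])
        want = (want + 1) % len(classes)  # alternate AD / control demos
    return chosen

def pva_inject(hidden, demo_summary, W, alpha=0.1):
    """Sketch of Projected Vector Anchoring: project a pooled demo
    representation into this layer's hidden space and add it to every
    token's hidden state (alpha controls injection strength)."""
    return hidden + alpha * (W @ demo_summary)

# Toy usage with hypothetical shapes and data.
demos = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0],
                  [1.0, 0.2], [0.5, 0.5], [0.1, 1.0]])
labels = [0, 0, 0, 1, 1, 1]               # 0 = control, 1 = AD (illustrative)
chosen = dcr_select(np.array([1.0, 0.0]), demos, labels, k=4)

rng = np.random.default_rng(0)
T, d, L = 5, 8, 3                          # tokens, hidden size, layers
hidden = rng.normal(size=(T, d))
demo_summary = rng.normal(size=d)          # pooled embedding of chosen demos
for _ in range(L):                         # one projection per layer
    W = 0.1 * rng.normal(size=(d, d))      # stand-in for a learnable matrix
    hidden = pva_inject(hidden, demo_summary, W)
```

The alternation over `classes` is what keeps the retrieved set "contrastive" in this sketch: each AD demo is paired with a control demo, so the in-context set exposes class-discriminative differences rather than redundant near-duplicates.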