🤖 AI Summary
In-context learning (ICL) for early Alzheimer's disease (AD) language screening suffers from suboptimal demonstration selection, limiting diagnostic accuracy. Method: We propose a dynamic demonstration selection paradigm centered on Delta-KNN, a novel strategy that first computes a task-adaptive delta score quantifying each training sample's diagnostic gain for the target input, then integrates embedding-based similarity to retrieve the most discriminative k-nearest neighbors. The approach plugs into ICL pipelines for open-source LLMs such as Llama-3.1 without any fine-tuning. Contribution/Results: Our method significantly outperforms existing ICL baselines on two standard AD text-detection benchmarks. Notably, with Llama-3.1 it achieves higher accuracy than supervised classifiers, setting a new state of the art. To our knowledge, this is the first work to empirically validate the feasibility and superiority of purely prompt-driven ICL for a complex clinical diagnostic task.
📝 Abstract
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that leads to dementia, and early intervention can greatly benefit from analyzing linguistic abnormalities. In this work, we explore the potential of Large Language Models (LLMs) as health assistants for AD diagnosis from patient-generated text using in-context learning (ICL), where tasks are defined through a few input-output examples. Empirical results reveal that conventional ICL methods, such as similarity-based selection, perform poorly for AD diagnosis, likely due to the inherent complexity of this task. To address this, we introduce Delta-KNN, a novel demonstration selection strategy that enhances ICL performance. Our method leverages a delta score to assess the relative gains of each training example, coupled with a KNN-based retriever that dynamically selects optimal "representatives" for a given input. Experiments on two AD detection datasets across three open-source LLMs demonstrate that Delta-KNN consistently outperforms existing ICL baselines. Notably, when using the Llama-3.1 model, our approach achieves new state-of-the-art results, surpassing even supervised classifiers.
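To make the selection mechanism concrete, here is a minimal sketch of how a delta-score-weighted KNN retriever could work. This is an illustrative reconstruction, not the paper's exact formulation: the function name `delta_knn_select`, the min-max normalization, and the `alpha` blending weight are all assumptions, and the delta scores are assumed to be precomputed per training example (e.g., as each example's average gain as a demonstration on held-out data).

```python
import numpy as np

def delta_knn_select(query_emb, train_embs, delta_scores, k=4, alpha=0.5):
    """Rank training examples by a blend of embedding similarity to the
    query and a precomputed delta (diagnostic-gain) score, then return
    the indices of the top-k demonstrations. alpha and the min-max
    normalization are illustrative design choices."""
    # Cosine similarity between the query and every training embedding.
    sims = train_embs @ query_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-9
    )

    def minmax(x):
        # Rescale to [0, 1] so similarity and delta are comparable.
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    combined = alpha * minmax(sims) + (1 - alpha) * minmax(delta_scores)
    # Indices of the k highest-scoring demonstrations, best first.
    return np.argsort(combined)[::-1][:k]

# Toy usage: 6 training examples with 3-d embeddings and delta scores.
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(6, 3))
delta_scores = np.array([0.1, -0.2, 0.4, 0.0, 0.3, -0.1])
query_emb = rng.normal(size=3)
picked = delta_knn_select(query_emb, train_embs, delta_scores, k=3)
print(picked)
```

The selected indices would then be used to assemble the k input-output demonstration pairs placed in the LLM prompt ahead of the target transcript.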