Advances in LLM Reasoning Enable Flexibility in Clinical Problem-Solving

📅 2026-01-17
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit cognitive flexibility in clinical reasoning, particularly their ability to avoid heuristic traps when confronted with medical questions designed to induce stereotyped thinking. Leveraging the Medicine Abstraction and Reasoning Corpus (mARC), an adversarial medical question-answering benchmark built around the Einstellung effect, the authors systematically evaluate leading strong-reasoning LLMs from the OpenAI, Grok, Gemini, Claude, and DeepSeek families. Strong-reasoning models achieve human-level cognitive flexibility on mARC, correctly answering, with high confidence, 55% to 70% of the questions most frequently missed by physicians, and they significantly outperform weak-reasoning counterparts. These results suggest that strong reasoning models may be less susceptible than humans to the Einstellung effect in complex clinical reasoning scenarios.

📝 Abstract
Large language models (LLMs) have achieved high accuracy on medical question-answering (QA) benchmarks, yet their capacity for flexible clinical reasoning has been debated. Here, we asked whether advances in reasoning LLMs improve their cognitive flexibility in clinical reasoning. We assessed reasoning models from the OpenAI, Grok, Gemini, Claude, and DeepSeek families on the Medicine Abstraction and Reasoning Corpus (mARC), an adversarial medical QA benchmark that uses the Einstellung effect to induce inflexible overreliance on learned heuristic patterns in contexts where they become suboptimal. We found that strong reasoning models avoided Einstellung-based traps more often than weaker reasoning models, achieving human-level performance on mARC. On the questions most commonly missed by physicians, the top five performing models answered 55% to 70% correctly with high confidence, indicating that these models may be less susceptible than humans to Einstellung effects. Our results indicate that strong reasoning models demonstrate improved flexibility in medical reasoning, achieving performance on par with humans on mARC.
Problem

Research questions and friction points this paper is trying to address.

clinical reasoning
cognitive flexibility
Einstellung effect
medical QA
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

cognitive flexibility
Einstellung effect
medical reasoning
large language models
adversarial benchmark
Kie Shidara
Weill Institute of Neurology and Neurosciences, University of California, San Francisco
Preethi Prem
Carle Illinois College of Medicine, University of Illinois Urbana-Champaign
Jonathan W. Kim
Department of Neurology and Neurological Sciences, Stanford University
Anna Podlasek
Image Guided Therapy Research Facility, University of Dundee
Feng Liu
Stevens Institute of Technology
EEG source imaging, Brain Networks, Dynamic System, Epilepsy, Mental Disorder
Ahmed Alaa
Department of EECS, University of California Berkeley
Danilo Bernardo
University of California, San Francisco
Epilepsy, Pediatric Epilepsy