🤖 AI Summary
This work addresses the lack of systematic evaluation of epidemiological reasoning in existing medical question-answering benchmarks, which predominantly focus on clinical knowledge or individual-level inference. The authors propose EpiQAL, the first fine-grained diagnostic benchmark tailored for epidemiological reasoning, encompassing three task types: factual recall, multi-step reasoning, and conclusion reconstruction. The dataset is constructed and validated through an expert-designed taxonomy, cross-model verification, and controlled retrieval difficulty; evaluation additionally probes Chain-of-Thought prompting. Experiments across ten open-source large language models reveal that performance is weakest on multi-step reasoning tasks, that model rankings vary significantly by task type, and that model scale is not a reliable predictor of performance, challenging the assumption that scaling alone suffices. Notably, Chain-of-Thought prompting proves effective only for complex reasoning tasks.
📝 Abstract
Reliable epidemiological reasoning requires synthesizing study evidence to infer disease burden, transmission dynamics, and intervention effects at the population level. Existing medical question-answering benchmarks primarily emphasize clinical knowledge or patient-level reasoning, and few systematically evaluate evidence-grounded epidemiological inference. We present EpiQAL, the first diagnostic benchmark for epidemiological question answering across diverse diseases, comprising three subsets built from open-access literature. The subsets respectively evaluate text-grounded factual recall, multi-step inference linking document evidence with epidemiological principles, and conclusion reconstruction with the Discussion section withheld. Construction combines expert-designed taxonomy guidance, multi-model verification, and retrieval-based difficulty control. Experiments on ten open-source models reveal that current LLMs perform poorly on epidemiological reasoning, with multi-step inference posing the greatest challenge. Model rankings shift across subsets, and scale alone does not predict success. Chain-of-Thought prompting benefits multi-step inference but yields mixed results elsewhere. EpiQAL provides fine-grained diagnostic signals for evidence grounding, inferential reasoning, and conclusion reconstruction.