PanCanBench: A Comprehensive Benchmark for Evaluating Large Language Models in Pancreatic Oncology

📅 2026-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluations of large language models (LLMs) predominantly rely on human-crafted multiple-choice questions, which inadequately capture their utility and safety in complex clinical contexts such as pancreatic cancer. To address this gap, this work introduces PanCanBench, the first pancreatic cancer–specific benchmark grounded in real patient queries, and establishes a fine-grained scoring rubric through an expert-in-the-loop human-AI collaborative process. Using an LLM-as-a-judge framework, the study systematically evaluates 22 leading models on clinical completeness, factual accuracy, and web-search integration. Results reveal completeness scores ranging from 46.5% to 82.3% and hallucination rates ranging from 6.0% to 53.8%. Notably, enabling web search or employing advanced reasoning strategies does not consistently improve factual accuracy, highlighting a significant disconnect between reasoning capability and factual consistency.
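For illustration, the sketch below shows one way an LLM-as-a-judge check against a single rubric criterion could be structured. The prompt wording and helper names (build_judge_prompt, call_judge, grade_response) are assumptions made for this sketch, not the paper's actual implementation, and call_judge is a stub standing in for a real judge-model API call.

```python
# Minimal sketch of an LLM-as-a-judge rubric check, assuming a generic
# chat-style judge model. All names and prompt wording are illustrative.

def build_judge_prompt(question: str, answer: str, criterion: str) -> str:
    """Ask the judge model whether one rubric criterion is satisfied."""
    return (
        "You are grading an answer to a pancreatic cancer patient question.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        f"Criterion: {criterion}\n"
        "Reply with exactly YES if the answer satisfies the criterion, otherwise NO."
    )

def call_judge(prompt: str) -> str:
    """Placeholder for a real judge-model API call; returns a canned verdict."""
    return "YES"

def grade_response(question: str, answer: str, criteria: list[str]) -> list[bool]:
    """Return one boolean verdict per rubric criterion."""
    return [
        call_judge(build_judge_prompt(question, answer, c)).strip().upper().startswith("YES")
        for c in criteria
    ]

if __name__ == "__main__":
    verdicts = grade_response(
        "Is Whipple surgery an option for stage IV disease?",
        "Surgery is generally not offered for metastatic disease; systemic therapy is standard.",
        [
            "Explains that surgery is typically not indicated for stage IV",
            "Recommends discussing systemic therapy options with the care team",
        ],
    )
    print(verdicts)
```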

📝 Abstract
Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety. As patients and clinicians increasingly use LLMs for guidance on complex conditions such as pancreatic cancer, evaluation must extend beyond general medical knowledge. Existing frameworks, such as HealthBench, rely on simulated queries and lack disease-specific depth. Moreover, high rubric-based scores do not ensure factual correctness, underscoring the need to assess hallucinations. We developed a human-in-the-loop pipeline to create expert rubrics for de-identified patient questions from the Pancreatic Cancer Action Network (PanCAN). The resulting benchmark, PanCanBench, includes 3,130 question-specific criteria across 282 authentic patient questions. We evaluated 22 proprietary and open-source LLMs using an LLM-as-a-judge framework, measuring clinical completeness, factual accuracy, and web-search integration. Models showed substantial variation in rubric-based completeness, with scores ranging from 46.5% to 82.3%. Factual errors were common, with hallucination rates (the percentages of responses containing at least one factual error) ranging from 6.0% for Gemini-2.5 Pro and GPT-4o to 53.8% for Llama-3.1-8B. Importantly, newer reasoning-optimized models did not consistently improve factuality: although o3 achieved the highest rubric score, it produced inaccuracies more frequently than other GPT-family models. Web-search integration did not inherently guarantee better responses. The average score changed from 66.8% to 63.9% for Gemini-2.5 Pro and from 73.8% to 72.8% for GPT-5 when web search was enabled. Synthetic AI-generated rubrics inflated absolute scores by 17.9 points on average while generally maintaining similar relative ranking.
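As a rough illustration of the two headline metrics described above, the sketch below computes a rubric completeness score (fraction of criteria satisfied, averaged over questions) and a hallucination rate (percentage of responses containing at least one factual error). Whether the paper averages per question or pools all criteria is an assumption here, and the data structures are illustrative only.

```python
# Minimal sketch of the benchmark's two headline metrics, assuming
# per-criterion boolean verdicts and per-response factual-error counts.

def completeness_score(per_question_verdicts: list[list[bool]]) -> float:
    """Mean fraction of rubric criteria satisfied, averaged over questions (in %)."""
    fractions = [sum(v) / len(v) for v in per_question_verdicts if v]
    return 100.0 * sum(fractions) / len(fractions)

def hallucination_rate(factual_errors_per_response: list[int]) -> float:
    """Percentage of responses containing at least one factual error."""
    n = len(factual_errors_per_response)
    return 100.0 * sum(1 for e in factual_errors_per_response if e > 0) / n

if __name__ == "__main__":
    verdicts = [[True, True, False], [True, False, False, True]]  # toy data
    errors = [0, 2, 0, 1, 0]                                      # toy data
    print(f"completeness: {completeness_score(verdicts):.1f}%")
    print(f"hallucination rate: {hallucination_rate(errors):.1f}%")
```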
Problem

Research questions and friction points this paper is trying to address.

pancreatic cancer
large language models
clinical evaluation
hallucination
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

PanCanBench
human-in-the-loop
hallucination evaluation
disease-specific benchmark
LLM-as-a-judge
👥 Authors
Yimin Zhao
National University of Singapore
Robotics, EEG, Deep Learning
Sheela R. Damle
Clinical Research Division, Fred Hutch Cancer Center; Division of Hematology and Oncology, Department of Medicine, University of Washington
Simone E. Dekker
Clinical Research Division, Fred Hutch Cancer Center; Division of Hematology and Oncology, Department of Medicine, University of Washington
Scott Geng
Allen Institute for AI; Department of Computer Science and Engineering, University of Washington
Karly Williams Silva
Clinical Research Division, Fred Hutch Cancer Center; Division of Hematology and Oncology, Department of Medicine, University of Washington
Jesse J Hubbard
Clinical Research Division, Fred Hutch Cancer Center; Division of Hematology and Oncology, Department of Medicine, University of Washington
Manuel F Fernandez
Clinical Research Division, Fred Hutch Cancer Center; Division of Hematology and Oncology, Department of Medicine, University of Washington
Fatima Zelada-Arenas
Pancreatic Cancer Action Network
Alejandra Alvarez
Pancreatic Cancer Action Network
Brianne Flores
Pancreatic Cancer Action Network
Alexis Rodriguez
Pancreatic Cancer Action Network
Stephen Salerno
Public Health Sciences, Biostatistics, Fred Hutchinson Cancer Center
Carrie Wright
Public Health Sciences, Biostatistics, Fred Hutchinson Cancer Center
Zihao Wang
Independent Researcher
Pang Wei Koh
University of Washington; Allen Institute for AI
Machine learning, Natural language processing, Computational biology
Jeffrey T. Leek
Department of Biostatistics, University of Washington; Public Health Sciences, Biostatistics, Fred Hutchinson Cancer Center