FDARxBench: Benchmarking Regulatory and Clinical Reasoning on FDA Generic Drug Assessment

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models struggle to meet the stringent demands of generic drug review—particularly in factual accuracy, multi-hop reasoning, and safe refusal—when processing U.S. FDA drug labeling documents. To address this gap, this work introduces FDARxBench, the first high-quality benchmark grounded in real-world FDA generic drug review scenarios. The benchmark features document-grounded question-answer pairs generated through an expert-in-the-loop, multi-stage pipeline and employs both open-book and closed-book evaluation protocols. Integrating regulatory expertise with the complexity of clinical texts, FDARxBench enables comprehensive assessment of models’ factuality, long-context retrieval, and refusal capabilities. Experimental results reveal significant performance deficiencies in both open-source and proprietary models, underscoring the benchmark’s rigor and its value in evaluating language models for regulatory applications.

📝 Abstract
We introduce an expert-curated, real-world benchmark for evaluating document-grounded question answering (QA), motivated by generic drug assessment and built on U.S. Food and Drug Administration (FDA) drug label documents. Drug labels contain rich but heterogeneous clinical and regulatory information, making accurate question answering difficult for current language models. In collaboration with FDA regulatory assessors, we construct FDARxBench through a multi-stage pipeline that generates high-quality, expert-curated QA examples spanning factual, multi-hop, and refusal tasks, and we design evaluation protocols to assess both open-book and closed-book reasoning. Experiments across proprietary and open-weight models reveal substantial gaps in factual grounding, long-context retrieval, and safe refusal behavior. While motivated by FDA generic drug assessment needs, the benchmark also provides a foundation for regulatory-grade evaluation of drug-label comprehension by language models.
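The open-book vs. closed-book distinction described above can be sketched as a prompt-construction step. This is a minimal illustration only: the item schema (field names like `label_excerpt` and `task_type`) and the prompt wording are assumptions for illustration, not the actual FDARxBench format.

```python
from dataclasses import dataclass


# Hypothetical schema for one benchmark item; field names are
# illustrative assumptions, not the actual FDARxBench data format.
@dataclass
class QAItem:
    question: str
    answer: str
    task_type: str      # e.g. "factual", "multi-hop", or "refusal"
    label_excerpt: str  # grounding passage from the FDA drug label


def build_prompt(item: QAItem, open_book: bool) -> str:
    """Open-book prompts include the drug-label excerpt as context;
    closed-book prompts present the question alone, probing recall."""
    if open_book:
        return (
            "Answer using only the drug label below. "
            "If the label does not contain the answer, refuse.\n\n"
            f"LABEL:\n{item.label_excerpt}\n\n"
            f"QUESTION: {item.question}"
        )
    return f"QUESTION: {item.question}"


item = QAItem(
    question="What is the maximum recommended daily dose?",
    answer="40 mg",
    task_type="factual",
    label_excerpt="DOSAGE AND ADMINISTRATION: Do not exceed 40 mg per day.",
)
print(build_prompt(item, open_book=True))
```

Refusal items would pair a question the label cannot answer with an expected refusal, so the open-book instruction above makes refusal the correct behavior rather than a failure mode.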
Problem

Research questions and friction points this paper is trying to address.

FDA
generic drug assessment
drug label
question answering
regulatory reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

document-grounded QA
regulatory reasoning
FDA drug labels
expert-curated benchmark
safe refusal