🤖 AI Summary
Existing biomedical question answering systems struggle with context-dependent reasoning under patient-specific conditions such as comorbidities and contraindications. To address this limitation, this work proposes CondMedQA—the first benchmark for condition-aware reasoning in biomedical QA—and introduces a Condition-Gated Reasoning (CGR) framework. CGR constructs a condition-aware knowledge graph to dynamically activate or prune reasoning paths, enabling context-sensitive multi-hop inference. Experimental results demonstrate that CGR significantly outperforms baseline models on CondMedQA and achieves state-of-the-art or competitive performance on standard biomedical QA benchmarks. These findings underscore the critical role of explicitly modeling conditional dependencies in enhancing the robustness of medical reasoning systems.
📝 Abstract
Current biomedical question answering (QA) systems often assume that medical knowledge applies uniformly, yet real-world clinical reasoning is inherently conditional: nearly every decision depends on patient-specific factors such as comorbidities and contraindications. Existing benchmarks do not evaluate such conditional reasoning, and retrieval-augmented or graph-based methods lack explicit mechanisms to ensure that retrieved knowledge is applicable to the given context. To address this gap, we propose CondMedQA, the first benchmark for conditional biomedical QA, consisting of multi-hop questions whose answers vary with patient conditions. Furthermore, we propose Condition-Gated Reasoning (CGR), a novel framework that constructs condition-aware knowledge graphs and selectively activates or prunes reasoning paths based on query conditions. Experiments show that CGR selects condition-appropriate answers more reliably than existing baselines while matching or exceeding state-of-the-art performance on standard biomedical QA benchmarks, highlighting the importance of explicitly modeling conditionality for robust medical reasoning.
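To make the gating idea concrete, here is a minimal, hypothetical sketch of condition-gated reasoning over a toy knowledge graph. The entities, relations, and contraindication sets below are illustrative assumptions, not taken from the paper or CondMedQA: each edge carries a set of patient conditions under which it must be pruned, and multi-hop inference runs only over the edges that survive the gate.

```python
from collections import deque

# Toy condition-aware knowledge graph (illustrative, not from the paper):
# (head, relation, tail, contraindicating_conditions). An edge is pruned
# whenever any of its contraindicating conditions holds for the patient.
EDGES = [
    ("hypertension", "treated_by", "ace_inhibitor", {"pregnancy"}),
    ("hypertension", "treated_by", "beta_blocker", {"asthma"}),
    ("hypertension", "treated_by", "thiazide", set()),
]

def active_edges(patient_conditions):
    """Gate step: keep only edges whose contraindications are absent."""
    return [(h, r, t) for (h, r, t, contra) in EDGES
            if not (contra & patient_conditions)]

def reachable_treatments(start, patient_conditions):
    """Multi-hop BFS over the gated graph, collecting 'treated_by' targets."""
    edges = active_edges(patient_conditions)
    seen, queue, answers = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        for h, r, t in edges:
            if h == node and t not in seen:
                seen.add(t)
                queue.append(t)
                if r == "treated_by":
                    answers.append(t)
    return answers

# The same question yields different answers under different conditions:
# a pregnant patient's graph has the ACE-inhibitor path pruned away.
print(reachable_treatments("hypertension", {"pregnancy"}))
print(reachable_treatments("hypertension", set()))
```

The point of the sketch is only the two-stage structure the abstract describes: condition-dependent edge gating first, then ordinary multi-hop graph traversal over whatever remains.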