🤖 AI Summary
Biomedical research lacks evaluation benchmarks supporting multi-hop, multi-answer reasoning, particularly for modeling complex one-to-many and many-to-many relationships. Method: We introduce BioHopR, the first benchmark for multi-hop, multi-answer reasoning over the biomedical knowledge graph PrimeKG, covering 1-hop and 2-hop paths among drugs, diseases, proteins, and other entities. It employs path sampling and semantic validation to generate high-quality, semantically grounded question-answer pairs that explicitly encode these intricate relational structures. Contribution/Results: Even the best-performing model, the reasoning-focused O3-mini, achieves only 37.93% precision on 1-hop tasks and 14.57% on 2-hop tasks, outperforming proprietary models such as GPT-4o and open-source models such as Llama-3.3. The sharp degradation from 1-hop to 2-hop across all models exposes systemic limitations in implicit multi-hop inference and demonstrates BioHopR's sensitivity and discriminative power for evaluating advanced biomedical reasoning.
📝 Abstract
Biomedical reasoning often requires traversing interconnected relationships across entities such as drugs, diseases, and proteins. Despite the increasing prominence of large language models (LLMs), existing benchmarks lack the ability to evaluate multi-hop reasoning in the biomedical domain, particularly for queries involving one-to-many and many-to-many relationships. This gap leaves the critical challenges of biomedical multi-hop reasoning underexplored. To address this, we introduce BioHopR, a novel benchmark designed to evaluate multi-hop, multi-answer reasoning in structured biomedical knowledge graphs. Built from the comprehensive PrimeKG, BioHopR includes 1-hop and 2-hop reasoning tasks that reflect real-world biomedical complexities. Evaluations of state-of-the-art models reveal that O3-mini, a proprietary reasoning-focused model, achieves 37.93% precision on 1-hop tasks and 14.57% on 2-hop tasks, outperforming proprietary models such as GPT-4o and open-source biomedical models including HuatuoGPT-o1-70B and Llama-3.3-70B. However, all models exhibit significant declines in multi-hop performance, underscoring the challenges of resolving implicit reasoning steps in the biomedical domain. By addressing the lack of benchmarks for multi-hop reasoning in the biomedical domain, BioHopR sets a new standard for evaluating reasoning capabilities, highlights critical gaps between proprietary and open-source models, and paves the way for future advancements in biomedical LLMs.
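The path-sampling idea behind the benchmark's 1-hop and 2-hop multi-answer questions can be sketched on a toy knowledge graph. This is a minimal illustration, not BioHopR's actual pipeline: the entities, relations, and triples below are made up for the example and are not drawn from PrimeKG.

```python
# Sketch of 1-hop and 2-hop multi-answer retrieval over a tiny
# (head, relation, tail) triple store. Illustrative data only.
TRIPLES = [
    ("aspirin",   "treats",          "headache"),
    ("aspirin",   "treats",          "fever"),
    ("aspirin",   "targets",         "PTGS2"),
    ("ibuprofen", "targets",         "PTGS2"),
    ("PTGS2",     "associated_with", "inflammation"),
]

def one_hop_answers(head: str, relation: str) -> set[str]:
    """All tails reachable from `head` via `relation`.

    A one-to-many relation yields a multi-answer set, e.g. one drug
    treating several conditions.
    """
    return {t for h, r, t in TRIPLES if h == head and r == relation}

def two_hop_answers(head: str, rel1: str, rel2: str) -> set[str]:
    """Entities reachable via head -rel1-> mid -rel2-> tail.

    The intermediate entity `mid` is left implicit in the question,
    which is what makes 2-hop inference hard for an LLM.
    """
    answers: set[str] = set()
    for mid in one_hop_answers(head, rel1):
        answers |= one_hop_answers(mid, rel2)
    return answers

# 1-hop, one-to-many: "Which conditions does aspirin treat?"
print(one_hop_answers("aspirin", "treats"))
# 2-hop, implicit intermediate: "Which processes are associated with
# the protein that aspirin targets?"
print(two_hop_answers("aspirin", "targets", "associated_with"))
```

A question generator built this way produces gold answer *sets* rather than single answers, which is why set-based precision (as reported for O3-mini above) is a natural metric for the benchmark.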