🤖 AI Summary
To address key challenges in knowledge-intensive biomedical question answering (difficulty integrating structured and unstructured knowledge, unreliable multi-step reasoning, and poor cross-regional generalization), this paper proposes KGARevion, a knowledge graph-enhanced agent built on a "Generate–Verify–Refine" three-stage reasoning paradigm. First, a large language model (LLM) generates candidate biomedical triplets from its latent knowledge; second, these triplets are verified against a grounded biomedical knowledge graph, filtering out errors and retaining only accurate, contextually relevant information; third, the verified knowledge is used to produce the final answer, in a process that adapts to rule-based, prototype-based, and case-based modes of medical reasoning. This closed-loop, multi-step framework also enables zero-shot transfer to underrepresented medical contexts. On mainstream biomedical QA benchmarks, KGARevion improves accuracy by more than 5.2% over 15 baseline models; on three newly curated medical QA datasets of varying semantic complexity, it improves accuracy by 10.4%; and on AfriMed-QA, a dataset focused on African healthcare, it demonstrates strong zero-shot generalization.
📝 Abstract
Biomedical reasoning integrates structured, codified knowledge with tacit, experience-driven insights. Depending on the context, quantity, and nature of available evidence, researchers and clinicians use diverse strategies, including rule-based, prototype-based, and case-based reasoning. Effective medical AI models must handle this complexity while ensuring reliability and adaptability. We introduce KGARevion, a knowledge graph-based agent that answers knowledge-intensive questions. Upon receiving a query, KGARevion generates relevant triplets by leveraging the latent knowledge embedded in a large language model. It then verifies these triplets against a grounded knowledge graph, filtering out errors and retaining only accurate, contextually relevant information for the final answer. This multi-step process strengthens reasoning, adapts to different models of medical inference, and outperforms retrieval-augmented generation approaches that lack effective verification mechanisms. Evaluations on medical QA benchmarks show that KGARevion improves accuracy by more than 5.2% compared with 15 baseline models on complex medical queries. To further assess its effectiveness, we curated three new medical QA datasets with varying levels of semantic complexity, on which KGARevion improved accuracy by 10.4%. The agent integrates with different LLMs and biomedical knowledge graphs for broad applicability across knowledge-intensive tasks. We also evaluated KGARevion on AfriMed-QA, a newly introduced dataset focused on African healthcare, demonstrating strong zero-shot generalization to underrepresented medical contexts.
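The generate-then-verify loop described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the toy knowledge graph, and the stubbed "LLM" are all illustrative assumptions; in KGARevion the generation step is an actual LLM call and verification is performed against a full biomedical knowledge graph.

```python
# Hypothetical sketch of a Generate-Verify-Refine pipeline.
# The toy knowledge graph and stub functions are assumptions for
# illustration, not KGARevion's actual components.

# Toy "knowledge graph": a set of (head, relation, tail) triplets.
KNOWLEDGE_GRAPH = {
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "inhibits", "hepatic gluconeogenesis"),
}

def generate_triplets(question):
    """Stage 1 (Generate): stand-in for an LLM proposing candidate
    triplets from its latent knowledge; may include hallucinations."""
    return [
        ("metformin", "treats", "type 2 diabetes"),
        ("metformin", "treats", "hypertension"),  # incorrect candidate
    ]

def verify_triplets(candidates, kg):
    """Stage 2 (Verify): keep only candidates grounded in the KG,
    filtering out erroneous or unsupported triplets."""
    return [t for t in candidates if t in kg]

def refine_answer(question, verified):
    """Stage 3 (Refine): compose the final answer from verified
    triplets only."""
    if not verified:
        return "Insufficient verified evidence."
    facts = "; ".join(f"{h} {r} {t}" for h, r, t in verified)
    return f"Answer grounded in: {facts}"

def answer(question, kg=KNOWLEDGE_GRAPH):
    candidates = generate_triplets(question)
    verified = verify_triplets(candidates, kg)
    return refine_answer(question, verified)
```

The key design point is that the unverified "hypertension" triplet never reaches the answering step, which is what distinguishes this paradigm from retrieval-augmented generation without a verification stage.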