🤖 AI Summary
Clinical treatment decisions in high-risk settings demand intelligent agents that reason accurately and safely, yet existing approaches face bottlenecks in reliability and domain-specific adaptability.
Method: We propose TxAgent, an iterative retrieval-augmented generation framework that dynamically invokes authoritative biomedical tools (e.g., FDA Drug API, OpenTargets). It introduces a fine-grained supervision signal that jointly models tool-invocation sequences and reasoning chains, coupled with a mechanism that co-optimizes retrieval quality and tool execution, to overcome the limitations of generic RAG in safety-critical healthcare applications.
Contribution/Results: Built upon a fine-tuned Llama-3.1-8B model, a dynamic function-calling architecture, and the unified ToolUniverse toolkit, TxAgent achieves state-of-the-art performance on the CURE-Bench therapeutic-reasoning benchmark. It won the Excellence Award in Open Science at the NeurIPS 2025 CURE-Bench Challenge, significantly improving both tool-call accuracy (+24.7%) and reasoning correctness (+19.3%) and empirically validating the pivotal role of retrieval quality in therapeutic decision-making.
📝 Abstract
Therapeutic decision-making in clinical medicine constitutes a high-stakes domain in which AI guidance must navigate complex interactions among patient characteristics, disease processes, and pharmacological agents. Tasks such as drug recommendation, treatment planning, and adverse-effect prediction demand robust, multi-step reasoning grounded in reliable biomedical knowledge. Agentic AI methods, exemplified by TxAgent, address these challenges through iterative retrieval-augmented generation (RAG). TxAgent employs a fine-tuned Llama-3.1-8B model that dynamically generates and executes function calls to a unified biomedical tool suite (ToolUniverse), integrating FDA Drug API, OpenTargets, and Monarch resources to ensure access to current therapeutic information. In contrast to general-purpose RAG systems, medical applications impose stringent safety constraints, rendering the accuracy of both the reasoning trace and the sequence of tool invocations critical. These considerations motivate evaluation protocols treating token-level reasoning and tool-usage behaviors as explicit supervision signals. This work presents insights derived from our participation in the CURE-Bench NeurIPS 2025 Challenge, which benchmarks therapeutic-reasoning systems using metrics that assess correctness, tool utilization, and reasoning quality. We analyze how retrieval quality for function (tool) calls influences overall model performance and demonstrate performance gains achieved through improved tool-retrieval strategies. Our work was awarded the Excellence Award in Open Science. Complete information can be found at https://curebench.ai/.
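To make the iterative retrieve-reason-act pattern concrete, below is a minimal sketch of such an agent loop. The tool registry, the `toy_policy` decision function, and all tool names are illustrative stand-ins, not TxAgent's or ToolUniverse's actual APIs; in the real system a fine-tuned model emits structured function calls and the trace is supervised at the token level.

```python
from typing import Callable, Optional, Tuple

# Hypothetical tool registry standing in for a unified biomedical suite
# (e.g., FDA Drug API or OpenTargets lookups in the real system).
TOOLS: "dict[str, Callable[[str], str]]" = {
    "drug_lookup": lambda q: f"label info for {q}",
    "target_lookup": lambda q: f"target info for {q}",
}

def toy_policy(question: str, trace: list) -> Optional[Tuple[str, str]]:
    """Stand-in for the fine-tuned model: choose the next tool call,
    or return None to stop and answer. Here the two-step plan is
    hard-coded purely for illustration."""
    if not trace:
        return ("drug_lookup", question)
    if len(trace) == 1:
        return ("target_lookup", question)
    return None  # enough evidence gathered; produce the final answer

def run_agent(question: str, max_steps: int = 5) -> dict:
    """Iterate: ask the policy for a tool call, execute it, append the
    result to the reasoning trace, and repeat until the policy stops.
    Both the final answer and the trace are returned, since evaluation
    protocols score tool-usage behavior as well as correctness."""
    trace: list = []
    for _ in range(max_steps):
        step = toy_policy(question, trace)
        if step is None:
            break
        tool, arg = step
        trace.append(f"{tool} -> {TOOLS[tool](arg)}")
    return {"answer": f"decision grounded in {len(trace)} tool results",
            "trace": trace}

result = run_agent("warfarin")
```

The design point this sketch captures is that retrieval happens inside the loop, so the quality of each tool call directly shapes the evidence available to later reasoning steps; this is the dependency the challenge metrics probe.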