Which Tool Response Should I Trust? Tool-Expertise-Aware Chest X-ray Agent with Multimodal Agentic Learning

📅 2026-02-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a tool-expertise-aware agent for chest X-ray analysis that addresses conflicting and erroneous outputs from existing medical AI tools, which current agents struggle to reconcile because they are unaware of each tool's actual reliability. The approach extends multi-turn tool-calling reinforcement learning to multimodal medical settings for the first time, supporting multiple tool invocations within a single turn, parallel tool inference, and multi-image inputs. A multimodal self-learning mechanism dynamically evaluates each tool's credibility under diverse queries, with reward signals that optimize the agent's trust decisions and yield fine-grained modeling of tool expertise. Experiments show that the method significantly outperforms state-of-the-art approaches and a range of baselines in chest X-ray analysis, substantially improving the decision reliability of medical AI agents.
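The trust-learning loop the summary describes (accept a tool's output, receive a reward, refine that tool's credibility estimate per query type) can be sketched as a simple bandit-style running average. All names below (`ToolTrustTable`, `tool_A`, the "effusion" query type) are illustrative assumptions, not identifiers from the paper:

```python
from collections import defaultdict

class ToolTrustTable:
    """Hypothetical sketch: per-(query_type, tool) trust learned from rewards."""

    def __init__(self):
        self.reward_sum = defaultdict(float)  # (query_type, tool) -> total reward
        self.count = defaultdict(int)         # (query_type, tool) -> #trials

    def update(self, query_type, tool, reward):
        """Record the reward obtained after accepting this tool's output."""
        self.reward_sum[(query_type, tool)] += reward
        self.count[(query_type, tool)] += 1

    def trust(self, query_type, tool):
        n = self.count[(query_type, tool)]
        # Neutral prior of 0.5 for unseen pairs so new tools still get tried.
        return self.reward_sum[(query_type, tool)] / n if n else 0.5

    def pick(self, query_type, tools):
        """Among conflicting tools, accept the one with the highest learned trust."""
        return max(tools, key=lambda t: self.trust(query_type, t))

table = ToolTrustTable()
# Simulated experience: for "effusion" queries, tool_A is usually right, tool_B rarely.
for _ in range(10):
    table.update("effusion", "tool_A", 1.0)
table.update("effusion", "tool_A", 0.0)
for _ in range(10):
    table.update("effusion", "tool_B", 0.0)
table.update("effusion", "tool_B", 1.0)

print(table.pick("effusion", ["tool_A", "tool_B"]))  # -> tool_A
```

The paper optimizes trust decisions with reinforcement learning rather than a flat average; this sketch only shows the shape of the credit assignment per (query type, tool) pair.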

πŸ“ Abstract
AI agents with tool-use capabilities show promise for integrating the domain expertise of various tools. In the medical field, however, tools are usually AI models that are inherently error-prone and can produce contradictory responses. Existing research on medical agents lacks sufficient understanding of the tools' realistic reliability and thus cannot effectively resolve tool conflicts. To address this gap, this paper introduces a framework that enables an agent to interact with tools and empirically learn their practical trustworthiness across different types of multimodal queries via agentic learning. As a concrete instantiation, we focus on chest X-ray analysis and present a tool-expertise-aware chest X-ray agent (TEA-CXA). When tool outputs disagree, the agent experimentally accepts or rejects multimodal tool results, receives rewards, and learns which tool to trust for each query type. Importantly, TEA-CXA extends existing codebases for multi-turn tool-calling reinforcement learning, which focus on textual inputs, to support multimodal contexts. In addition, we enhance the codebase for medical scenarios by supporting multiple tool calls in one turn, parallel tool inference, and multiple images within a single user query. Our code framework is applicable to general medical research on multi-turn tool-calling reinforcement learning in multimodal settings. Experiments show that TEA-CXA outperforms state-of-the-art methods and a comprehensive set of baselines. Code will be released.
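The abstract's codebase extensions (several tool calls issued in one turn, run in parallel, over a multi-image query, with disagreements resolved by learned trust) can be illustrated with a minimal sketch. This is not the released code; the tool functions, `TRUST` scores, and image names are all stand-in assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def classifier_tool(images):   # stand-ins for error-prone medical AI tools
    return "cardiomegaly: present"

def report_tool(images):
    return "cardiomegaly: absent"

TOOLS = {"classifier": classifier_tool, "report_gen": report_tool}
TRUST = {"classifier": 0.8, "report_gen": 0.4}  # learned via RL in the paper's setup

def call_tools_parallel(tool_names, images):
    """Run all tools requested in one turn concurrently on the same multi-image query."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(TOOLS[name], images) for name in tool_names}
        return {name: f.result() for name, f in futures.items()}

def resolve(results):
    """When outputs disagree, keep the answer of the most trusted tool."""
    best = max(results, key=lambda name: TRUST[name])
    return results[best]

results = call_tools_parallel(["classifier", "report_gen"],
                              ["frontal.png", "lateral.png"])
print(resolve(results))  # -> cardiomegaly: present
```

In the paper the trust scores are themselves learned through the reward-driven agentic loop rather than fixed constants; the sketch only shows how single-turn parallel calls and trust-based conflict resolution fit together.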
Problem

Research questions and friction points this paper is trying to address.

tool reliability
medical AI agents
chest X-ray analysis
multimodal tool conflict
trustworthiness
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal agentic learning
tool-expertise awareness
reinforcement learning
tool reliability
chest X-ray analysis