🤖 AI Summary
In high-stakes domains such as clinical decision-making, AI model explanations often lack verifiability, undermining practitioner trust. To address this, we propose the Action-based Reasoning Agent (ARA), which formalizes diagnostic explanations as auditable action sequences. ARA actively retrieves external visual evidence via reinforcement learning and incorporates a causal intervention mechanism to ensure explanation faithfulness and traceability. Our key contributions are: (1) explicit decomposition of the explanation process into verifiable, executable actions; and (2) quantification of the causal contribution of explanations to model decisions via evidence-masking experiments. Evaluated on medical image diagnosis tasks, ARA significantly improves calibrated accuracy over non-interactive baselines, reducing the Brier score by 18%. Critically, masking pivotal evidence increases the Brier score by 0.029, empirically validating both the faithfulness and necessity of the generated explanations.
📄 Abstract
Explanations for AI models in high-stakes domains like medicine often lack verifiability, which can hinder trust. To address this, we propose an interactive agent that produces explanations through an auditable sequence of actions. The agent learns a policy to strategically seek external visual evidence to support its diagnostic reasoning. This policy is optimized using reinforcement learning, resulting in a model that is both efficient and generalizable. Our experiments show that this action-based reasoning process significantly improves calibrated accuracy, reducing the Brier score by 18% compared to a non-interactive baseline. To validate the faithfulness of the agent's explanations, we introduce a causal intervention method. By masking the visual evidence the agent chooses to use, we observe a measurable degradation in its performance ($\Delta$Brier = +0.029), confirming that the evidence is integral to its decision-making process. Our work provides a practical framework for building AI systems with verifiable and faithful reasoning capabilities.
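The faithfulness check above compares the model's Brier score with and without the evidence it selected. As a minimal sketch of that comparison, the snippet below computes the Brier score (mean squared error between predicted probabilities and binary outcomes) and the resulting $\Delta$Brier under masking; the probability values are illustrative placeholders, not the paper's data:

```python
import numpy as np

def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and binary labels."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return float(np.mean((probs - labels) ** 2))

# Hypothetical diagnostic predictions for the same cases,
# once with the agent's chosen evidence visible and once with it masked.
labels = [1, 0, 1, 1, 0]
p_full = [0.9, 0.2, 0.8, 0.7, 0.1]    # evidence available to the agent
p_masked = [0.7, 0.4, 0.6, 0.5, 0.3]  # pivotal evidence masked out

delta_brier = brier_score(p_masked, labels) - brier_score(p_full, labels)
# A positive delta means masking the selected evidence degrades calibrated
# accuracy, which is the signature of a faithful explanation in this setup.
```

A near-zero or negative $\Delta$Brier would instead suggest the cited evidence was not actually load-bearing for the decision.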