🤖 AI Summary
Current whole-slide image (WSI) analysis methods lack an explicit, interpretable reasoning process, yielding opaque predictions. To address this, we propose the first zero-shot, training-free agent framework for WSI analysis built on a large language model (LLM). Our method emulates pathologists' dynamic visual inspection and reflective reasoning through three synergistic modules (navigation, perception, and execution), enabling traceable joint vision-language reasoning: it iteratively localizes diagnostically salient regions, extracts morphological features, and generates natural-language reasoning chains. Evaluated on five diverse WSI datasets, the approach significantly outperforms specialized baselines, demonstrating strong zero-shot generalization on both open-ended and constrained visual question answering while delivering clinically validated interpretability. This work establishes the first transparent, autonomous, fine-tuning-free diagnostic reasoning paradigm for computational pathology.
📝 Abstract
Analyzing whole-slide images (WSIs) requires an iterative, evidence-driven reasoning process that parallels how pathologists dynamically zoom, refocus, and self-correct while gathering evidence. Existing computational pipelines, however, lack this explicit reasoning trajectory, resulting in inherently opaque and unjustifiable predictions. To bridge this gap, we present PathAgent, a training-free, large language model (LLM)-based agent framework that emulates the reflective, stepwise analysis of human experts. PathAgent autonomously explores a WSI: the Navigator iteratively and precisely locates significant micro-regions, the Perceptor extracts morphological visual cues, and the Executor integrates these findings into a continuously evolving natural-language trajectory. The entire sequence of observations and decisions forms an explicit chain of thought, yielding fully interpretable predictions. Evaluated across five challenging datasets, PathAgent exhibits strong zero-shot generalization, surpassing task-specific baselines on both open-ended and constrained visual question-answering tasks. Moreover, a collaborative evaluation with human pathologists confirms PathAgent's promise as a transparent, clinically grounded diagnostic assistant.
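The Navigator/Perceptor/Executor loop described above can be sketched in miniature. This is a hypothetical illustration of the control flow only, not the paper's actual API: the module names, region format, and stop criterion are assumptions, and the real system would use an LLM and a vision model where stubs appear below.

```python
def navigator(wsi, trajectory):
    """Pick the next diagnostically salient region (stubbed heuristic;
    the real Navigator would query an LLM over the slide)."""
    step = len(trajectory)
    return {"x": 512 * step, "y": 512 * step, "level": 1}

def perceptor(wsi, region):
    """Describe the region's morphology (stubbed; the real Perceptor
    would run a vision-language model on the cropped patch)."""
    return f"region at ({region['x']}, {region['y']}): dense nuclei, irregular glands"

def executor(trajectory, finding, max_steps=3):
    """Fold the new finding into the evolving natural-language trajectory
    and decide whether enough evidence has been gathered."""
    trajectory.append(finding)
    done = len(trajectory) >= max_steps  # stand-in for an LLM stop decision
    return trajectory, done

def path_agent(wsi, max_steps=3):
    """Iterate navigate -> perceive -> execute until the agent stops."""
    trajectory, done = [], False
    while not done:
        region = navigator(wsi, trajectory)
        finding = perceptor(wsi, region)
        trajectory, done = executor(trajectory, finding, max_steps)
    return "\n".join(trajectory)  # the explicit chain of thought

print(path_agent(wsi="slide_001"))
```

The returned string is the full reasoning trace, which is what makes the prediction traceable: each line records where the agent looked and what it saw.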