Exploring the robustness of TractOracle methods in RL-based tractography

📅 2025-07-15
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This paper studies the robustness of TractOracle-RL, a reinforcement learning (RL) framework for white matter fiber tractography that reduces false positives by embedding anatomical priors into training through an oracle-based reward. The authors investigate four extensions of the original framework that integrate recent advances in RL, and introduce Iterative Reward Training (IRT), a scheme inspired by reinforcement learning from human feedback (RLHF) that replaces human input with bundle filtering methods to iteratively refine the oracle during training. Experiments on five public diffusion MRI datasets show that pairing an oracle with RL consistently yields robust, anatomically plausible tractography regardless of the specific method or dataset, outperforming widely used tractography techniques in accuracy and anatomical validity.

📝 Abstract
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.
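The IRT scheme described above (alternating between oracle-rewarded RL tracking and re-fitting the oracle on filtered streamlines) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: each streamline is collapsed to a single scalar "plausibility" feature, `bundle_filter` stands in for an anatomical bundle filtering method, and the `Oracle` is a hypothetical one-parameter classifier.

```python
import random

def bundle_filter(streamlines, threshold=0.5):
    """Stand-in for a bundle filtering method: labels each streamline
    as anatomically plausible (1) or implausible (0)."""
    return [(s, 1 if s > threshold else 0) for s in streamlines]

class Oracle:
    """Toy oracle: a one-parameter classifier scoring streamline plausibility."""
    def __init__(self):
        self.cutoff = 0.0  # learned decision boundary

    def score(self, s):
        return 1.0 if s > self.cutoff else 0.0

    def fit(self, labeled):
        # Move the cutoff toward the boundary implied by the filter labels.
        pos = [s for s, y in labeled if y == 1]
        neg = [s for s, y in labeled if y == 0]
        if pos and neg:
            self.cutoff = (min(pos) + max(neg)) / 2.0

def train_policy(oracle, rng, n=200):
    """Stand-in for the RL phase: 'tracking' proposes streamlines and keeps
    those the oracle rewards, mimicking reward-guided exploration."""
    proposals = [rng.random() for _ in range(n)]
    return [s for s in proposals if oracle.score(s) > 0.0]

def iterative_reward_training(rounds=3, seed=0):
    rng = random.Random(seed)
    oracle = Oracle()
    streamlines = []
    for _ in range(rounds):
        streamlines = train_policy(oracle, rng)  # RL phase: oracle as reward
        labeled = bundle_filter(streamlines)     # filtering phase: fresh labels
        oracle.fit(labeled)                      # oracle update phase
    return oracle, streamlines
```

The point of the loop is that the oracle's guidance is not fixed: each round, the filtering method relabels the policy's latest streamlines, and the oracle is refit on those labels before the next RL phase.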
Problem

Research questions and friction points this paper is trying to address.

Evaluating robustness of TractOracle-RL in tractography
Extending TractOracle-RL with advanced RL techniques
Improving accuracy via Iterative Reward Training (IRT)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates anatomical priors via reward-based mechanism
Introduces Iterative Reward Training (IRT) for refinement
Combines oracle with RL for robust tractography
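The reward-based mechanism named in the bullets above can be sketched as a per-streamline reward signal. The split into a per-step alignment term plus a terminal oracle bonus is an assumption for illustration (not the paper's exact formulation), and `alignment`, `oracle_score`, and `w_oracle` are hypothetical stand-ins.

```python
def streamline_reward(directions, oracle_score, w_oracle=10.0):
    """Toy per-step rewards for one streamline: a local direction-alignment
    term at each tracking step, plus an anatomical-plausibility bonus from
    the oracle at termination. Illustrative split, not the paper's method."""
    def alignment(d1, d2):
        # Cosine similarity between consecutive unit step directions.
        return sum(a * b for a, b in zip(d1, d2))

    rewards = [0.0]  # no previous direction at the first step
    for prev, curr in zip(directions, directions[1:]):
        rewards.append(alignment(prev, curr))
    rewards[-1] += w_oracle * oracle_score  # terminal anatomical bonus
    return rewards
```

For a perfectly straight three-step streamline with an oracle score of 1.0, this yields rewards of 0.0, 1.0, and 11.0: smooth local tracking earns small dense rewards, while the oracle's anatomical judgment dominates at termination.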
👥 Authors
Jeremi Levesque
Department of Computer Science, Faculty of Science, University of Sherbrooke
Antoine Théberge
Department of Computer Science, Faculty of Science, University of Sherbrooke
Maxime Descoteaux
Professor of Computer Science, Université de Sherbrooke
Medical image analysis, brain connectivity, diffusion MRI, tractography
Pierre-Marc Jodoin
Université de Sherbrooke
Machine learning, video analytics, medical image analysis