🤖 AI Summary
This work proposes a framework integrating differentiable neuro-symbolic abductive reasoning with active learning to address key challenges in automatic radiology report generation, including vision–language misalignment, factual inconsistency, and the absence of explicit multi-hop clinical reasoning. The approach maps images to probabilistic clinical concepts, constructs differentiable logical inference chains, and decodes findings into structured reports via templated clauses. To refine the reasoning process, an active sampling strategy is introduced that leverages both rule uncertainty and sample diversity, enabling iterative improvement of inference rules and prompt templates through clinician feedback. Evaluated on standard benchmarks, the method significantly outperforms representative baselines in both factual consistency and linguistic quality.
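The concept-to-finding reasoning step described above can be sketched as a soft conjunction over probabilistic clinical concepts. This is a minimal illustration using the product t-norm as the soft AND; the function names and example concepts (e.g. `opacity_right_lower`) are hypothetical and not taken from the paper's actual rule language:

```python
# Hypothetical sketch of a differentiable inference chain: a rule
# "antecedent concepts => finding" is scored by the product of the
# concept probabilities (product t-norm as a soft AND), so the score
# stays differentiable with respect to the concept probabilities.
def soft_and(probs):
    """Soft conjunction: product of probabilities in [0, 1]."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def score_rule(concept_probs, antecedents):
    """Score one rule given soft truth values for its antecedent concepts."""
    return soft_and(concept_probs[c] for c in antecedents)

# Illustrative concept probabilities as they might come from the vision encoder.
concept_probs = {"opacity_right_lower": 0.9, "air_bronchogram": 0.8}
s = score_rule(concept_probs, ["opacity_right_lower", "air_bronchogram"])
```

A high rule score would then be decoded into a templated clause (e.g. a consolidation finding); low scores leave the clause out of the report.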
📝 Abstract
Automatic generation of radiology reports seeks to reduce clinician workload while improving documentation consistency. Existing encoder–decoder and retrieval-augmented pipelines have improved fluency but remain vulnerable to visual–linguistic biases, factual inconsistency, and a lack of explicit multi-hop clinical reasoning. We present NeuroSymb-MRG, a unified framework that integrates neuro-symbolic abductive reasoning with active uncertainty minimization to produce structured, clinically grounded reports. The system maps image features to probabilistic clinical concepts, composes differentiable logic-based reasoning chains, decodes those chains into templated clauses, and refines the textual output via retrieval and constrained language-model editing. An active sampling loop driven by rule-level uncertainty and sample diversity guides clinician-in-the-loop adjudication and promptbook refinement. Experiments on standard benchmarks demonstrate consistent improvements in factual consistency and standard language metrics over representative baselines.
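The active sampling loop driven by rule-level uncertainty and diversity can be sketched as a greedy acquisition step. This is a minimal illustration under stated assumptions: the mixing weight `alpha`, the binary-entropy uncertainty term, and the Euclidean nearest-neighbor diversity term are choices made for the sketch, not the paper's exact formulation:

```python
import math

def entropy(p):
    """Binary entropy of a rule-level confidence p in (0, 1): higher = more uncertain."""
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def min_dist(x, selected_feats):
    """Diversity: distance from candidate features x to the nearest already-selected sample."""
    if not selected_feats:
        return 1.0  # arbitrary default before anything is selected
    return min(math.dist(x, s) for s in selected_feats)

def acquire(candidates, k, alpha=0.5):
    """Greedily pick k candidates maximizing alpha*uncertainty + (1-alpha)*diversity,
    i.e. samples whose rules are uncertain AND that differ from what was already picked."""
    selected, pool = [], list(candidates)
    for _ in range(k):
        best = max(
            pool,
            key=lambda c: alpha * entropy(c["conf"])
            + (1 - alpha) * min_dist(c["feat"], [s["feat"] for s in selected]),
        )
        selected.append(best)
        pool.remove(best)
    return selected

# Illustrative pool: conf = rule confidence, feat = image/concept embedding.
cands = [
    {"conf": 0.5, "feat": (0.0, 0.0)},   # maximally uncertain
    {"conf": 0.9, "feat": (0.1, 0.0)},   # confident and redundant with the first
    {"conf": 0.55, "feat": (2.0, 2.0)},  # uncertain and far from the first
]
picked = acquire(cands, k=2)
```

The selected samples would then be routed to clinicians for adjudication, and the resulting labels used to update the inference rules and promptbook.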