🤖 AI Summary
This work addresses the challenge of ensuring logical consistency in clinical text structuring, where inter-variable dependencies often lead to clinically implausible outputs that conventional large language models struggle to resolve. To this end, the authors propose a Deep Reflective Reasoning framework that introduces, for the first time, an iterative self-reflection mechanism into clinical information extraction. By coupling a large language model agent with domain-knowledge retrieval and explicit consistency verification, the framework models interdependencies among clinical variables and iteratively refines its outputs until convergence. Evaluated on three oncology tasks—colorectal cancer, Ewing sarcoma, and lung cancer—the approach yields substantial performance gains, notably raising lung cancer staging accuracy from 0.680 to 0.833, and improves both the logical coherence and clinical reliability of structured outputs.
📝 Abstract
Extracting structured information from clinical notes requires navigating a dense web of interdependent variables, where the value of one attribute logically constrains others. Existing Large Language Model (LLM)-based extraction pipelines often struggle to capture these dependencies, leading to clinically inconsistent outputs. We propose deep reflective reasoning, a large language model agent framework that iteratively self-critiques and revises structured outputs by checking consistency among the extracted variables, the input text, and retrieved domain knowledge, stopping when the outputs converge. We evaluate the proposed method extensively in three diverse oncology applications: (1) On colorectal cancer synoptic reporting from gross descriptions (n=217), reflective reasoning improved average F1 across eight categorical synoptic variables from 0.828 to 0.911 and increased the mean correct rate across four numeric variables from 0.806 to 0.895; (2) On Ewing sarcoma CD99 immunostaining pattern identification (n=200), accuracy improved from 0.870 to 0.927; (3) On lung cancer pathologic staging (n=100), overall stage accuracy improved from 0.680 to 0.833 (pT: 0.842 -> 0.884; pN: 0.885 -> 0.948). These results demonstrate that deep reflective reasoning can systematically improve the reliability of LLM-based structured data extraction under interdependence constraints, enabling more consistent, machine-operable clinical datasets and facilitating machine-learning-driven knowledge discovery in digital health.
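The critique-revise-converge loop described in the abstract can be sketched in a few lines. The version below is a conceptual toy, not the authors' implementation: the LLM agent and knowledge retrieval are stubbed with a hard-coded, deliberately simplified staging table (illustrative only, not real AJCC rules), and all function names (`check_consistency`, `revise`, `reflective_extract`) are hypothetical.

```python
# Toy "retrieved knowledge": simplified pT/pN -> stage constraints
# (illustrative only; NOT real AJCC staging rules).
KNOWLEDGE = {
    ("T2", "N0"): "I",
    ("T2", "N1"): "II",
    ("T3", "N1"): "III",
}

def check_consistency(record):
    """Self-critique step: return (variable, expected_value) pairs for
    values that violate inter-variable constraints from the knowledge."""
    expected = KNOWLEDGE.get((record["pT"], record["pN"]))
    if expected is not None and record["stage"] != expected:
        return [("stage", expected)]
    return []

def revise(record, critiques):
    """Revision step: apply each critique. In the real framework an LLM
    rewrites the structured output conditioned on critique and source text."""
    revised = dict(record)
    for variable, value in critiques:
        revised[variable] = value
    return revised

def reflective_extract(initial, max_iters=5):
    """Iterate critique -> revise until the output stops changing
    (a fixed point, i.e. convergence) or the iteration budget runs out."""
    current = initial
    for _ in range(max_iters):
        revised = revise(current, check_consistency(current))
        if revised == current:  # no further critiques: converged
            return current
        current = revised
    return current

# An initial draft whose stage contradicts its pT/pN values:
draft = {"pT": "T2", "pN": "N1", "stage": "I"}
final = reflective_extract(draft)
print(final["stage"])  # revised to the stage implied by pT/pN
```

The key design choice mirrored here is the explicit convergence test: iteration stops only when a revision pass produces no change, rather than after a fixed number of critique rounds.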