AI Summary
Integrating clinical practice guidelines into AI systems remains challenging: existing approaches suffer from poor interpretability, low guideline adherence, and limited generalizability. This work proposes the first framework that automatically transforms narrative clinical guidelines into structured decision trees and generates executable prompts for large language models (LLMs). By combining automated prompt engineering, dynamic reasoning, and synthetic case generation, the framework delivers high-fidelity, interpretable, and cross-domain clinical decision support. Empirical evaluation demonstrates its effectiveness: on a binary specialty-referral task, the system achieves F1 scores of 0.85 to 1.00, while on multi-path classification tasks F1 scores range from 0.47 to 0.77, delineating both its performance capabilities and its applicability boundaries.
Abstract
Clinical practice guidelines (CPGs) provide evidence-based recommendations for patient care; however, integrating them into Artificial Intelligence (AI) systems remains challenging. Previous approaches, such as rule-based systems, face significant limitations, including poor interpretability, inconsistent adherence to guidelines, and narrow domain applicability. To address this, we develop and validate CPGPrompt, an auto-prompting system that converts narrative clinical guidelines into executable prompts for large language models (LLMs). Our framework translates CPGs into structured decision trees and uses an LLM to dynamically navigate them during patient case evaluation. Synthetic vignettes were generated across three domains (headache, lower back pain, and prostate cancer) and distributed into four categories to test different decision scenarios. System performance was assessed on both binary specialty-referral decisions and fine-grained pathway-classification tasks. Binary specialty-referral classification achieved consistently strong performance across all domains (F1: 0.85-1.00), with high recall (1.00 ± 0.00). In contrast, multi-class pathway assignment showed reduced performance, with domain-specific variation: headache (F1: 0.47), lower back pain (F1: 0.72), and prostate cancer (F1: 0.77). These differences reflected the structure of each guideline: the headache guideline exposed challenges with negation handling, the lower back pain guideline required temporal reasoning, whereas prostate cancer pathways benefited from quantifiable laboratory tests, yielding more reliable decision-making.
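The core mechanism described above, an LLM walking a guideline-derived decision tree node by node, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Node` structure, the toy referral tree, and the `answer` stub (which stands in for an LLM yes/no query about the vignette) are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One decision point in a guideline-derived tree."""
    question: str = ""
    yes: Optional["Node"] = None
    no: Optional["Node"] = None
    label: Optional[str] = None  # set only on leaf nodes (final recommendation)

def answer(case_findings: set[str], question: str) -> bool:
    # Placeholder for an LLM call that reads the patient vignette and
    # answers the node's yes/no question; a membership test stands in here.
    return question in case_findings

def navigate(root: Node, case_findings: set[str]) -> str:
    """Follow yes/no branches until a leaf recommendation is reached."""
    node = root
    while node.label is None:
        node = node.yes if answer(case_findings, node.question) else node.no
    return node.label

# Toy referral tree (hypothetical; not taken from the actual guidelines)
tree = Node(
    question="red-flag symptom present",
    yes=Node(label="refer to specialist"),
    no=Node(
        question="symptoms persist beyond 6 weeks",
        yes=Node(label="refer to specialist"),
        no=Node(label="manage in primary care"),
    ),
)

print(navigate(tree, {"red-flag symptom present"}))  # refer to specialist
print(navigate(tree, set()))                         # manage in primary care
```

Because each traversal records the sequence of questions asked and answers given, the resulting decision path doubles as an interpretable audit trail, which is one motivation for the tree-based design over free-form prompting.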