🤖 AI Summary
Existing medical rule engines, such as Norway’s GURI system for cancer registration, face challenges in accurately and efficiently handling real-world clinical rules.
Method: We propose LLMeDiff—the first framework integrating large language models (LLMs) into differential testing of medical rule engines. It leverages GPT-3.5, Claude, Llama, and Gemini to automatically generate medical rule test cases and employs a dual-engine differential validation mechanism to detect logical inconsistencies.
Contribution/Results: We systematically evaluate LLMs across hallucination rate, generation success rate, and robustness; GPT-3.5 achieves the best overall performance—lowest hallucination, highest success rate, and strongest robustness—though with longest runtime. Our evaluation uncovers 22 clinically significant implementation discrepancies across GURI versions, demonstrating LLMeDiff’s efficacy in identifying subtle yet critical rule-engine inconsistencies. This work establishes a novel, empirically grounded paradigm for trustworthy verification of medical rule engines.
📝 Abstract
The Cancer Registry of Norway (CRN) uses an automated cancer registration support system (CaReSS) to support core cancer registry activities, i.e., data capture, data curation, and producing data products and statistics for various stakeholders. GURI is a core component of CaReSS, responsible for validating incoming data against medical rules. Such medical rules are manually implemented by medical experts based on medical standards, regulations, and research. Since large language models (LLMs) have been trained on a large amount of public information, including these documents, they can be employed to generate tests for GURI. Thus, we propose an LLM-based test generation and differential testing approach (LLMeDiff) to test GURI. We experimented with four different LLMs, two medical rule engine implementations, and 58 real medical rules to investigate the hallucination, success rate, time efficiency, and robustness of the LLMs in generating tests, and these tests' ability to find potential issues in GURI. Our results showed that GPT-3.5 hallucinates the least, is the most successful, and is generally the most robust; however, it has the worst time efficiency. Our differential testing revealed 22 medical rules where implementation inconsistencies were discovered (e.g., regarding handling rule versions). Finally, we provide insights for practitioners and researchers based on the results.
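The core idea of the differential-testing step can be illustrated with a minimal sketch: run each (LLM-generated) test case through two rule-engine implementations and flag any disagreement as a candidate inconsistency. The rule, engine functions, and record fields below are hypothetical illustrations, not the actual GURI API or CRN rules.

```python
# Hypothetical rule: a diagnosis year must not precede the birth year.
# Two engine "versions" implement it slightly differently (a seeded bug),
# mimicking the kind of version discrepancy differential testing can expose.

def engine_a(record):
    # Version A: non-strict comparison (diagnosis in the birth year is valid).
    return record["diagnosis_year"] >= record["birth_year"]

def engine_b(record):
    # Version B: strict comparison -- diverges on the boundary case.
    return record["diagnosis_year"] > record["birth_year"]

def differential_test(test_cases):
    """Run each test case through both engines; collect disagreements."""
    discrepancies = []
    for case in test_cases:
        verdict_a, verdict_b = engine_a(case), engine_b(case)
        if verdict_a != verdict_b:
            discrepancies.append((case, verdict_a, verdict_b))
    return discrepancies

# In LLMeDiff the test cases come from prompting an LLM with the rule text;
# here they are hard-coded for illustration.
cases = [
    {"birth_year": 1950, "diagnosis_year": 2010},
    {"birth_year": 1950, "diagnosis_year": 1950},  # boundary case
]
print(differential_test(cases))
```

Only the boundary case is reported, since the two engines agree everywhere else; in practice, each flagged case is then reviewed by experts to decide which implementation is correct.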