Medical Data Pecking: A Context-Aware Approach for Automated Quality Evaluation of Structured Medical Data

πŸ“… 2025-07-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Electronic health records (EHRs) are widely used in epidemiology and AI research, yet their data quality suffers from subgroup bias, systematic errors, and insufficient applicability assessment. To address these challenges, we propose a context-aware, automated medical data quality evaluation frameworkβ€”the first to adapt software engineering principles of unit testing and coverage analysis to EHR validation. Our method integrates large language models (LLMs) for test case generation, medical knowledge anchoring, and research-context-driven data fitness analysis. Based on this, we develop MDPT, a tool comprising a test generator and executor. Evaluated on All of Us, MIMIC-III, and SyntheticMass datasets, MDPT generates 55–73 tests per cohort and detects 20–43 instances of data inconsistency or anomaly. The approach significantly improves both accuracy and interpretability in assessing EHR suitability for downstream research.

πŸ“ Abstract
Background: The use of Electronic Health Records (EHRs) for epidemiological studies and artificial intelligence (AI) training is increasing rapidly. The reliability of the results depends on the accuracy and completeness of EHR data. However, EHR data often contain significant quality issues, including misrepresentations of subpopulations, biases, and systematic errors, as they are primarily collected for clinical and billing purposes. Existing quality assessment methods remain insufficient, lacking systematic procedures to assess data fitness for research.

Methods: We present the Medical Data Pecking approach, which adapts unit testing and coverage concepts from software engineering to identify data quality concerns. We demonstrate our approach using the Medical Data Pecking Tool (MDPT), which consists of two main components: (1) an automated test generator that uses large language models and grounding techniques to create a test suite from data and study descriptions, and (2) a data testing framework that executes these tests, reporting potential errors and coverage.

Results: We evaluated MDPT on three datasets: All of Us (AoU), MIMIC-III, and SyntheticMass, generating 55–73 tests per cohort across four conditions. These tests correctly identified 20–43 non-aligned or non-conforming data issues. We present a detailed analysis of the LLM-generated test suites in terms of reference grounding and value accuracy.

Conclusion: Our approach incorporates external medical knowledge to enable context-sensitive data quality testing as part of the data analysis workflow, improving the validity of research outcomes. By tackling these challenges from a quality assurance perspective, it lays the foundation for further development, such as support for additional data modalities and improved grounding methods.
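The unit-test-and-coverage idea in the Methods section can be illustrated with a minimal sketch. Everything here is an illustrative assumption: the field names, the plausible range, and the helper `check_range` are hypothetical and are not MDPT's actual API or the tests its generator emits; the point is only the shape of a context-aware check that reports both violations and coverage.

```python
# Sketch of a unit-test-style data quality check in the spirit of "pecking"
# tests: flag values outside a medically plausible range and report how much
# of the cohort the test actually covered. All names and bounds are
# illustrative assumptions, not MDPT's real output.

def check_range(records, field, lo, hi):
    """Return (issues, coverage) for `field` against the range [lo, hi]."""
    issues = []
    covered = 0
    for i, rec in enumerate(records):
        value = rec.get(field)
        if value is None:
            continue  # missing values lower coverage but are not range errors
        covered += 1
        if not (lo <= value <= hi):
            issues.append((i, field, value))
    coverage = covered / len(records) if records else 0.0
    return issues, coverage

# Toy cohort: heart rate in beats per minute, with an assumed plausible
# adult range of 30-220 bpm.
cohort = [
    {"patient_id": 1, "heart_rate": 72},
    {"patient_id": 2, "heart_rate": 450},  # looks like a systematic entry error
    {"patient_id": 3},                     # heart rate not recorded
]
issues, coverage = check_range(cohort, "heart_rate", 30, 220)
print(issues)    # [(1, 'heart_rate', 450)]
print(coverage)  # ~0.667: one of three records lacks the field
```

In the paper's framing, an LLM grounded in external medical knowledge would propose such range bounds (and many other checks) from the study description, and the executor would run the suite and report violations plus coverage.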
Problem

Research questions and friction points this paper is trying to address.

Automated quality evaluation of structured medical data
Identifying data quality issues in Electronic Health Records
Improving reliability of EHR data for research and AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts software engineering unit testing for medical data
Uses LLMs to generate automated test suites
Incorporates external knowledge for context-sensitive testing
πŸ”Ž Similar Papers
No similar papers found.
Irena Girshovitz
School of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel
Atai Ambus
AI and Data Science Center of Tel Aviv University (TAD), Tel Aviv, Israel
Moni Shahar
AI and Data Science Center of Tel Aviv University (TAD), Tel Aviv, Israel
Ran Gilad-Bachrach
Tel-Aviv University
Machine Learning · Private AI · Behavioral Changes · Social Emotional Learning · Machine Teaching