Compact Multimodal Language Models as Robust OCR Alternatives for Noisy Textual Clinical Reports

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional OCR systems show poor robustness and raise privacy risks when transcribing smartphone photographs of obstetric ultrasound reports from Indian healthcare settings, where images are often degraded by motion blur, shadows, and other noise. Method: We propose a lightweight multimodal language model (MLLM) as an OCR alternative, enabling end-to-end joint vision-language modeling suitable for on-device deployment and privacy-preserving inference. We systematically evaluate eight models along three dimensions: noise sensitivity, numerical recognition accuracy, and computational efficiency. Results: The compact MLLM significantly outperforms both classical and neural OCR baselines, with +12.3% transcription accuracy, +18.7% F1 for numeric entity extraction, and superior noise resilience, demonstrating strong suitability for low-resource clinical settings. This work presents the first empirical validation of lightweight MLLMs for real-world clinical document digitization, establishing both technical feasibility and clinical utility.
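The two headline metrics (transcription accuracy and numeric entity F1) can be made concrete with a small sketch. This is an illustrative approximation, not the paper's exact evaluation protocol: `char_error_rate` uses standard Levenshtein distance, and `numeric_entity_f1` treats the multiset of numeric tokens (e.g. gestational age, biometry values) as the entities to match; both function names and definitions are assumptions for illustration.

```python
import re
from collections import Counter

def char_error_rate(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance / reference length.

    Illustrative stand-in for "transcription accuracy" (accuracy ~ 1 - CER).
    """
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # DP row for edit distance
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n] / max(m, 1)

def numeric_entity_f1(reference: str, hypothesis: str) -> float:
    """F1 over the multiset of numeric tokens in the transcript.

    A simple proxy for numeric entity extraction: every decimal number
    (e.g. "2450", "8.9") counts as one entity.
    """
    ref = Counter(re.findall(r"\d+(?:\.\d+)?", reference))
    hyp = Counter(re.findall(r"\d+(?:\.\d+)?", hypothesis))
    tp = sum((ref & hyp).values())  # numbers recovered exactly
    if tp == 0:
        return 0.0
    precision = tp / sum(hyp.values())
    recall = tp / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, a transcript that corrupts one digit of an estimated fetal weight ("EFW 2450 g" read as "EFW 2460 g") keeps a low character error rate but loses half the numeric F1, which is why the paper reports the two metrics separately.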

📝 Abstract
Digitization of medical records often relies on smartphone photographs of printed reports, producing images degraded by blur, shadows, and other noise. Conventional OCR systems, optimized for clean scans, perform poorly under such real-world conditions. This study evaluates compact multimodal language models as privacy-preserving alternatives for transcribing noisy clinical documents. Using obstetric ultrasound reports written in regionally inflected medical English common to Indian healthcare settings, we compare eight systems in terms of transcription accuracy, noise sensitivity, numeric accuracy, and computational efficiency. Compact multimodal models consistently outperform both classical and neural OCR pipelines. Despite higher computational costs, their robustness and linguistic adaptability position them as viable candidates for on-premises healthcare digitization.
Problem

Research questions and friction points this paper is trying to address.

Developing compact multimodal models for noisy clinical document transcription
Addressing OCR performance degradation with blurred and shadowed medical images
Evaluating privacy-preserving alternatives for digitizing regional medical reports
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compact multimodal models replace OCR for noisy documents
Models outperform classical and neural OCR pipelines
Robust linguistic adaptability enables on-premises healthcare digitization