🤖 AI Summary
Existing document image quality assessment (IQA) methods suffer from limited accuracy and robustness, especially under diverse degradation types. To address this, we propose DeQA-Doc—a novel framework that adapts the multimodal large language model (MLLM)-based DeQA-Score to the document domain for the first time. Our method introduces three key innovations: (1) a variance-free soft-label regression strategy to mitigate label noise and improve generalization; (2) a high-resolution, loosely constrained input mechanism to preserve fine-grained structural details critical for document analysis; and (3) a model ensemble scheme to enhance stability across heterogeneous degradations. Extensive experiments demonstrate that DeQA-Doc consistently outperforms state-of-the-art methods across multiple benchmark datasets, achieving superior accuracy, cross-degradation generalization, and robustness. The source code and pre-trained models are publicly released to foster reproducibility and further research.
📝 Abstract
Document quality assessment is critical for a wide range of applications, including document digitization, OCR, and archival. However, existing approaches often struggle to provide accurate and robust quality scores, limiting their applicability in practical scenarios. With the rapid progress of Multi-modal Large Language Models (MLLMs), recent MLLM-based methods have achieved remarkable performance in image quality assessment. In this work, we extend this success to the document domain by adapting DeQA-Score, a state-of-the-art MLLM-based image quality scorer, for document quality assessment. We propose DeQA-Doc, a framework that leverages the vision-language capabilities of MLLMs and a soft-label strategy to regress continuous document quality scores. To adapt DeQA-Score to the document domain, we adopt two complementary solutions that construct soft labels without variance information. We also relax the resolution constraints to support the large resolutions of document images. Finally, we introduce ensemble methods to further enhance performance. Extensive experiments demonstrate that DeQA-Doc significantly outperforms existing baselines, offering accurate and generalizable document quality assessment across diverse degradation types. Code and model weights are available at https://github.com/Junjie-Gao19/DeQA-Doc.
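To make the soft-label idea concrete: DeQA-Score-style methods discretize a continuous quality score into a distribution over rating-level tokens. The abstract does not spell out the variance-free construction, but a minimal sketch of one plausible option is to split probability mass between the two nearest levels by linear interpolation (the level names and 1–5 scale here are assumptions, not the paper's exact design):

```python
# Hypothetical sketch: build a variance-free soft label for a quality score.
# Without a variance, one simple choice is linear interpolation between the
# two adjacent discrete levels. Level names/values are illustrative only.

LEVELS = {"bad": 1.0, "poor": 2.0, "fair": 3.0, "good": 4.0, "excellent": 5.0}

def soft_label(score: float) -> dict:
    """Map a continuous score in [1, 5] to a distribution over levels."""
    names = list(LEVELS)
    values = list(LEVELS.values())
    score = min(max(score, values[0]), values[-1])  # clamp to valid range
    for i in range(len(values) - 1):
        lo, hi = values[i], values[i + 1]
        if lo <= score <= hi:
            w_hi = (score - lo) / (hi - lo)  # weight on the upper level
            return {names[i]: 1.0 - w_hi, names[i + 1]: w_hi}
    return {names[-1]: 1.0}

# Example: a score of 3.7 splits mass roughly 30%/70% between "fair"/"good".
```

Such a distribution can then supervise the model's next-token probabilities over level words, letting it regress a continuous score without ground-truth variance.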