DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) exhibit weak interpretability, unreliable reasoning, and poor cross-domain generalization when applied to document understanding in high-stakes domains such as law, finance, and healthcare. Method: We propose a rule-guided reinforcement learning framework that replaces static chain-of-thought (CoT) prompting with dynamic reasoning policy optimization. Built upon supervised fine-tuning, it jointly optimizes for structured reasoning paths, question reformulation, and key region localization via multi-objective rule-based rewards and KL-divergence regularization. Contribution/Results: Our approach significantly mitigates catastrophic forgetting and enhances both cross-task and cross-domain generalization. It achieves state-of-the-art performance and improved transparency on multiple document understanding benchmarks, empirically validating the efficacy of RL-driven dynamic, interpretable reasoning.
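The multi-objective rule-based rewards described above could be composed roughly as follows. This is a hypothetical sketch, not the paper's implementation: the tag names (`<think>`, `<answer>`), the IoU-based RoI term, and the weights are all illustrative assumptions.

```python
import re

def format_reward(output: str) -> float:
    """Rule check: 1.0 if the output contains the expected structured sections."""
    required = ("<think>", "</think>", "<answer>", "</answer>")
    return 1.0 if all(tag in output for tag in required) else 0.0

def answer_reward(output: str, gold: str) -> float:
    """Exact-match check on the extracted answer span."""
    m = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
    if m is None:
        return 0.0
    return 1.0 if m.group(1).strip().lower() == gold.strip().lower() else 0.0

def roi_reward(pred_box, gold_box) -> float:
    """Intersection-over-union between predicted and gold regions of interest.

    Boxes are (x1, y1, x2, y2) tuples.
    """
    x1 = max(pred_box[0], gold_box[0]); y1 = max(pred_box[1], gold_box[1])
    x2 = min(pred_box[2], gold_box[2]); y2 = min(pred_box[3], gold_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(gold_box) - inter
    return inter / union if union > 0 else 0.0

def total_reward(output, gold, pred_box, gold_box,
                 w_fmt=0.2, w_ans=0.6, w_roi=0.2) -> float:
    """Weighted sum of the rule-based terms (weights here are illustrative)."""
    return (w_fmt * format_reward(output)
            + w_ans * answer_reward(output, gold)
            + w_roi * roi_reward(pred_box, gold_box))
```

Because each term is a deterministic rule rather than a learned reward model, the signal is cheap to compute and hard for the policy to game, which is the usual motivation for rule-based RL in this setting.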

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in document understanding. However, their reasoning processes remain largely black-box, making it difficult to ensure reliability and trustworthiness, especially in high-stakes domains such as legal, financial, and medical document analysis. Existing methods use fixed Chain-of-Thought (CoT) reasoning with supervised fine-tuning (SFT) but suffer from catastrophic forgetting, poor adaptability, and limited generalization across domain tasks. In this paper, we propose DocThinker, a rule-based Reinforcement Learning (RL) framework for dynamic inference-time reasoning. Instead of relying on static CoT templates, DocThinker autonomously refines reasoning strategies via policy learning, generating explainable intermediate results, including structured reasoning processes, rephrased questions, regions of interest (RoI) supporting the answer, and the final answer. By integrating multi-objective rule-based rewards and KL-constrained optimization, our method mitigates catastrophic forgetting and enhances both adaptability and transparency. Extensive experiments on multiple benchmarks demonstrate that DocThinker significantly improves generalization while producing more explainable and human-understandable reasoning steps. Our findings highlight RL as a powerful alternative for enhancing explainability and adaptability in MLLM-based document understanding. Code will be available at https://github.com/wenwenyu/DocThinker.
Problem

Research questions and friction points this paper is trying to address.

Ensuring reliability in legal, financial, and medical document analysis
Catastrophic forgetting and poor adaptability in fixed CoT reasoning with SFT
Limited explainability and cross-domain generalization in document understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rule-based Reinforcement Learning for dynamic reasoning
Multi-objective rule-based rewards enhance transparency
KL-constrained optimization mitigates catastrophic forgetting
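The KL-constrained optimization in the last bullet can be sketched as a penalty that keeps the RL policy close to the SFT reference model. This is a minimal illustration, assuming a sampled-sequence KL estimate and a coefficient `beta`; neither the function names nor the default value come from the paper.

```python
def kl_penalized_reward(reward: float,
                        policy_logprobs: list,
                        ref_logprobs: list,
                        beta: float = 0.05) -> float:
    """Subtract a KL penalty, estimated from sampled tokens, from the rule-based reward.

    Averaging log pi(a|s) - log pi_ref(a|s) over the sampled sequence gives an
    unbiased estimate of KL(pi || pi_ref); penalizing it discourages the policy
    from drifting far from the SFT model, mitigating catastrophic forgetting.
    """
    kl_est = sum(p - r for p, r in zip(policy_logprobs, ref_logprobs)) / len(policy_logprobs)
    return reward - beta * kl_est
```

In practice the penalized reward would feed a policy-gradient update; when the policy and reference assign identical log-probabilities, the penalty vanishes and the rule-based reward passes through unchanged.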