Table2LaTeX-RL: High-Fidelity LaTeX Code Generation from Table Images via Reinforced Multimodal Language Models

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of generating high-fidelity LaTeX code from complex table images—characterized by large dimensions, deep nesting, rich semantics, or irregular content—this paper proposes an end-to-end dual-reward reinforcement learning framework. The method leverages a multimodal large language model and jointly optimizes for two complementary rewards: LaTeX structural correctness and visual fidelity of the rendered output. Policy optimization is performed via Group Relative Policy Optimization (GRPO), and the model is fine-tuned on a large-scale table–LaTeX paired dataset. Evaluation employs a hybrid metric combining TEDS-Structure and CW-SSIM. Experiments demonstrate state-of-the-art performance across multiple benchmarks, with particularly pronounced gains on structurally complex tables. The approach achieves high accuracy, strong robustness to diverse table layouts, and publication-ready output quality.
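The hybrid evaluation protocol combines TEDS-Structure (tree-edit-distance similarity over the predicted table structure) and CW-SSIM (complex-wavelet structural similarity over the rendered output image). The summary does not state how the two scores are merged into one number; the sketch below assumes a simple convex combination, with the mixing weight `alpha` as a hypothetical parameter rather than the paper's actual protocol.

```python
def hybrid_score(teds_structure: float, cw_ssim: float, alpha: float = 0.5) -> float:
    """Combine structural and visual similarity into a single score in [0, 1].

    teds_structure: TEDS-Structure score of the predicted vs. ground-truth table tree.
    cw_ssim: CW-SSIM between the rendered prediction and the reference table image.
    alpha: hypothetical mixing weight; the paper's exact combination may differ.
    """
    return alpha * teds_structure + (1.0 - alpha) * cw_ssim
```

With equal weighting, a prediction scoring 0.8 on structure and 0.6 on visual fidelity would receive a hybrid score of 0.7.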

📝 Abstract
In this work, we address the task of table image to LaTeX code generation, with the goal of automating the reconstruction of high-quality, publication-ready tables from visual inputs. A central challenge of this task lies in accurately handling complex tables -- those with large sizes, deeply nested structures, and semantically rich or irregular cell content -- where existing methods often fail. We begin with a comprehensive analysis, identifying key challenges and highlighting the limitations of current evaluation protocols. To overcome these issues, we propose a reinforced multimodal large language model (MLLM) framework, where a pre-trained MLLM is fine-tuned on a large-scale table-to-LaTeX dataset. To further improve generation quality, we introduce a dual-reward reinforcement learning strategy based on Group Relative Policy Optimization (GRPO). Unlike standard approaches that optimize purely over text outputs, our method incorporates both a structure-level reward on LaTeX code and a visual fidelity reward computed from rendered outputs, enabling direct optimization of the visual output quality. We adopt a hybrid evaluation protocol combining TEDS-Structure and CW-SSIM, and show that our method achieves state-of-the-art performance, particularly on structurally complex tables, demonstrating the effectiveness and robustness of our approach.
Problem

Research questions and friction points this paper is trying to address.

Automating reconstruction of publication-ready tables from visual inputs
Accurately handling complex tables with nested structures and irregular content
Optimizing directly for visual quality of the rendered output, which text-only objectives fail to capture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforced multimodal language model fine-tuned on table data
Dual-reward reinforcement learning with Group Relative Policy Optimization
Combines structure-level and visual fidelity rewards for optimization
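GRPO dispenses with a learned value function: for each table image the policy samples a group of candidate LaTeX outputs, each candidate's scalar reward is normalized against the group's mean and standard deviation, and that normalized value serves as the advantage. A minimal sketch of this group-relative step with the dual reward, assuming an equal 50/50 weighting of the two reward terms (the weights `w_struct` and `w_visual` are illustrative, not taken from the paper):

```python
import statistics

def dual_reward(r_struct: float, r_visual: float,
                w_struct: float = 0.5, w_visual: float = 0.5) -> float:
    """Blend LaTeX structural correctness with rendered visual fidelity.
    The 50/50 weighting here is an assumption, not the paper's setting."""
    return w_struct * r_struct + w_visual * r_visual

def grpo_advantages(group_rewards: list[float]) -> list[float]:
    """Group-relative advantages: z-score each reward within its sample group."""
    mu = statistics.mean(group_rewards)
    sigma = statistics.pstdev(group_rewards) or 1.0  # guard against all-equal groups
    return [(r - mu) / sigma for r in group_rewards]

# Example: three sampled LaTeX candidates for one table image.
rewards = [dual_reward(1.0, 0.8), dual_reward(0.5, 0.5), dual_reward(0.0, 0.2)]
advantages = grpo_advantages(rewards)
```

Because advantages are centered within each group, above-average candidates are reinforced and below-average ones are suppressed without any separate critic network.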
👥 Authors
Jun Ling (Shanghai Jiao Tong University)
Yao Qi (Research Center for Scientific Data Hub, Zhejiang Lab, Hangzhou, China)
Tao Huang (School of Computer Science and Engineering, University of Electronic Science and Technology of China)
Shibo Zhou (Research Center for Scientific Data Hub, Zhejiang Lab, Hangzhou, China)
Yanqin Huang (Research Center for Scientific Data Hub, Zhejiang Lab, Hangzhou, China)
Jiang Yang (Research Center for Scientific Data Hub, Zhejiang Lab, Hangzhou, China)
Ziqi Song (Research Center for Scientific Data Hub, Zhejiang Lab, Hangzhou, China)
Ying Zhou (Research Center for Scientific Data Hub, Zhejiang Lab, Hangzhou, China)
Yang Yang (School of Computer Science and Engineering, University of Electronic Science and Technology of China)
Heng Tao Shen (School of Computer Science and Engineering, University of Electronic Science and Technology of China)
Peng Wang (School of Computer Science and Engineering, University of Electronic Science and Technology of China)