TeXpert: A Multi-Level Benchmark for Evaluating LaTeX Code Generation by LLMs

📅 2025-06-20
🤖 AI Summary
This work addresses the lack of systematic evaluation of large language models (LLMs) on LaTeX code generation for scientific documents. It introduces TeXpert, the first multi-level benchmark tailored to scientific typesetting (covering titles, mathematical formulas, tables, citations, and more) that enables end-to-end natural-language-to-LaTeX assessment. Methodologically, the benchmark combines human-annotated, difficulty-stratified prompts; multi-dimensional automated verification (including compilability, semantic equivalence, and formatting compliance); and a unified evaluation framework accommodating both open- and closed-source LLMs. Key contributions include: (1) the first empirical identification of capability gaps and prevalent error patterns in LLM-based LaTeX generation, with formatting and package errors accounting for 68% of failures; (2) evidence that state-of-the-art open-weight models (e.g., DeepSeek-v3) match or approach closed-source counterparts; and (3) results suggesting that diverse LaTeX constructs are severely underrepresented in training data. Evaluations across 12 mainstream LLMs reveal a sharp accuracy decline as task complexity increases, with an average drop exceeding 40%.
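The paper's automated verification pipeline is not detailed on this page; as an illustrative sketch of one of the dimensions it names, formatting compliance, the checks below statically test a generated LaTeX snippet for balanced environments and for required package declarations. The function names and the specific checks are assumptions for illustration, not the authors' implementation.

```python
import re


def check_environments_balanced(latex: str) -> bool:
    """Check that every \\begin{env} has a matching \\end{env} in LIFO order."""
    stack = []
    for kind, env in re.findall(r"\\(begin|end)\{([^}]+)\}", latex):
        if kind == "begin":
            stack.append(env)
        elif not stack or stack.pop() != env:
            return False
    return not stack


def missing_packages(latex: str, required: list[str]) -> list[str]:
    """Return the packages from `required` that are never loaded via \\usepackage."""
    loaded = set()
    for names in re.findall(r"\\usepackage(?:\[[^\]]*\])?\{([^}]+)\}", latex):
        loaded.update(p.strip() for p in names.split(","))
    return [p for p in required if p not in loaded]
```

A compilability check would additionally invoke an actual TeX engine (e.g. `pdflatex` in a subprocess) on the full document, which static checks like these cannot replace; they only catch the cheap, common failure modes before compilation.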

📝 Abstract
LaTeX's precision and flexibility in typesetting have made it the gold standard for the preparation of scientific documentation. Large Language Models (LLMs) present a promising opportunity for researchers to produce publication-ready material using LaTeX with natural language instructions, yet current benchmarks completely lack evaluation of this ability. By introducing TeXpert, our benchmark dataset with natural language prompts for generating LaTeX code focused on components of scientific documents across multiple difficulty levels, we conduct an in-depth analysis of LLM performance in this regard and identify frequent error types. Our evaluation across open and closed-source LLMs highlights multiple key findings: LLMs excelling on standard benchmarks perform poorly in LaTeX generation with a significant accuracy drop-off as the complexity of tasks increases; open-source models like DeepSeek v3 and DeepSeek Coder strongly rival closed-source counterparts in LaTeX tasks; and formatting and package errors are unexpectedly prevalent, suggesting a lack of diverse LaTeX examples in the training datasets of most LLMs. Our dataset, code, and model evaluations are available at https://github.com/knowledge-verse-ai/TeXpert.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LaTeX code generation by LLMs for scientific documents
Assessing LLM performance across multiple difficulty levels
Identifying common LaTeX formatting and package errors in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level benchmark for LaTeX code generation
Natural language prompts for scientific LaTeX components
Evaluation of open and closed-source LLM performance