Speech-to-LaTeX: New Models and Datasets for Converting Spoken Equations and Sentences

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses pronunciation ambiguity in converting spoken mathematical expressions into structured symbolic representations (e.g., LaTeX), proposing a speech-to-LaTeX framework that moves beyond the conventional two-stage ASR-then-convert pipeline. The method combines ASR post-correction, few-shot prompting, and audio language models, trained on the first large-scale, open-source, multilingual mathematical speech dataset: 66K human-annotated, cross-disciplinary utterances in English and Russian. Key contributions include two new benchmarks, S2L-equations and S2L-sentences. Experiments show a 28% character error rate (CER) on MathSpeech, comparable to the prior 30%; 27% CER on S2L-equations versus a 64% baseline; and 40% CER on S2L-sentences, demonstrating practical value for educational and scientific speech transcription.

📝 Abstract
Conversion of spoken mathematical expressions is a challenging task that involves transcribing speech into a strictly structured symbolic representation while addressing the ambiguity inherent in the pronunciation of equations. Although significant progress has been achieved in automatic speech recognition (ASR) and language models (LM), the problem of converting spoken mathematics into LaTeX remains underexplored. This task directly applies to educational and research domains, such as lecture transcription or note creation. Prior work, based on ASR post-correction, requires two transcriptions, focuses only on isolated equations, has a limited test set, and provides neither training data nor multilingual coverage. To address these issues, we present the first fully open-source large-scale dataset, comprising over 66,000 human-annotated audio samples of mathematical equations and sentences in both English and Russian, drawn from diverse scientific domains. In addition to the ASR post-correction models and few-shot prompting, we apply audio language models, demonstrating comparable character error rate (CER) results on the MathSpeech benchmark (28% vs. 30%) for equation conversion. In contrast, on the proposed S2L-equations benchmark, our models outperform the MathSpeech model by a substantial margin of more than 40 percentage points, even after accounting for LaTeX formatting artifacts (27% vs. 64%). We establish the first benchmark for mathematical sentence recognition (S2L-sentences) and achieve an equation CER of 40%. This work lays the groundwork for future advances in multimodal AI, with a particular focus on mathematical content recognition.
Problem

Research questions and friction points this paper is trying to address.

Converting spoken math to LaTeX with high accuracy
Addressing ambiguity in spoken equation pronunciation
Creating open datasets for multilingual math speech recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source dataset with 66k annotated math samples
Combines ASR post-correction and audio language models
Achieves 40% CER on new sentence recognition benchmark
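All of the results above are reported as character error rate (CER): the character-level edit distance between the predicted and reference LaTeX strings, normalized by the reference length. A minimal sketch of how such a score is computed (a plain Levenshtein implementation, not the paper's evaluation code, which may additionally normalize LaTeX formatting artifacts):

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # minimum of deletion, insertion, substitution/match
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character error rate: edit distance / reference length."""
    return levenshtein(hypothesis, reference) / max(len(reference), 1)

# E.g., one wrong character in an 11-character LaTeX string -> CER of 1/11
print(cer(r"\frac{a}{b}", r"\frac{a}{c}"))
```

A CER of 28% therefore means that, on average, roughly 28 character edits are needed per 100 reference characters to recover the ground-truth LaTeX.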