NeuRaLaTeX: A machine learning library written in pure LaTeX

📅 2025-03-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses LaTeX's longstanding lack of executable machine learning capabilities by introducing what is claimed to be the first deep learning library written entirely in LaTeX. Leveraging TeX macro programming, recursive expansion, floating-point arithmetic simulation, and pseudorandom number generation, the library lets a .tex source file directly specify neural architectures, loss functions, synthetic or loaded data, hyperparameters, and training procedures; model training, evaluation, visualization, and result export all run automatically during compilation. The authors define two new metrics: WIL (Written In LaTeX), the proportion of an ML library written in pure LaTeX, and SCOMISCOP (Source Code Of Method In Source Code Of Paper), the proportion of a paper's implementation contained within the paper source. The entire implementation is embedded in the paper's source. A single compilation, taking 48 hours, generates a spiral dataset, trains and evaluates a two-layer MLP, and produces the paper's figures and tables, achieving state-of-the-art performance on both proposed metrics.
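The summary's mention of "floating-point arithmetic simulation" in TeX can be illustrated with standard tooling. The sketch below is not NeuRaLaTeX's actual implementation; it uses the independent `pgfmath` package (part of PGF/TikZ) to show that the TeX engine can evaluate a single neuron's forward pass, including a sigmoid, entirely at compile time, with the weight and input values invented for the example:

```latex
\documentclass{article}
\usepackage{pgfmath} % compile-time floating-point expression parser
\begin{document}
% Illustrative only: a one-neuron forward pass evaluated by TeX itself.
\pgfmathsetmacro{\wone}{0.8}   % weight 1 (arbitrary example value)
\pgfmathsetmacro{\wtwo}{-0.5}  % weight 2
\pgfmathsetmacro{\bias}{0.1}   % bias
% Pre-activation: z = w1*x1 + w2*x2 + b, with inputs x = (1.0, 2.0)
\pgfmathsetmacro{\z}{\wone*1.0 + \wtwo*2.0 + \bias}
% Sigmoid activation, computed during compilation
\pgfmathsetmacro{\activation}{1/(1 + exp(-\z))}
The neuron outputs $\activation$.
\end{document}
```

Chaining such macro evaluations through recursive expansion is the general mechanism by which a full forward and backward pass can, in principle, be driven by the compiler.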

📝 Abstract
In this paper, we introduce NeuRaLaTeX, which we believe to be the first deep learning library written entirely in LaTeX. As part of your LaTeX document you can specify the architecture of a neural network and its loss functions, define how to generate or load training data, and specify training hyperparameters and experiments. When the document is compiled, the LaTeX compiler will generate or load training data, train the network, run experiments, and generate figures. This paper generates a random 100-point spiral dataset, trains a two-layer MLP on it, evaluates it on a different random spiral dataset, and produces plots and tables of results. The paper took 48 hours to compile, and the entire source code for NeuRaLaTeX is contained within the source code of the paper. We propose two new metrics: the Written In LaTeX (WIL) metric measures the proportion of a machine learning library that is written in pure LaTeX, while the Source Code Of Method In Source Code Of Paper (SCOMISCOP) metric measures the proportion of a paper's implementation that is contained within the paper source. We are state-of-the-art for both metrics, outperforming the ResNet and Transformer papers, as well as the PyTorch and TensorFlow libraries. Source code, documentation, videos, crypto scams and an invitation to invest in the commercialisation of NeuRaLaTeX are available at https://www.neuralatex.com
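The abstract describes specifying a network, data, and training run inside the document source. The fragment below is a purely hypothetical sketch of what such a document-level interface could look like; the package and macro names (`neuralatex`, `\SpiralDataset`, `\DefineMLP`, `\TrainNetwork`, `\PlotDecisionBoundary`) are invented for illustration and are not the actual NeuRaLaTeX API:

```latex
\documentclass{article}
\usepackage{neuralatex} % hypothetical package name
\begin{document}
% Everything below would execute during compilation, per the abstract.
\SpiralDataset{points=100, seed=42}            % generate training data
\DefineMLP{layers={2,16,2}, loss=crossentropy} % two-layer MLP
\TrainNetwork{epochs=100, lr=0.05}             % training loop in TeX
\PlotDecisionBoundary                          % figure emitted into the PDF
\end{document}
```

The design point the abstract emphasizes is that no external runtime is involved: compiling the paper is the experiment.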
Problem

Research questions and friction points this paper is trying to address.

Creating a deep learning library entirely in LaTeX
Training neural networks directly within LaTeX documents
Introducing metrics for LaTeX-based ML library evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning library in pure LaTeX
LaTeX compiler trains neural networks
Metrics for LaTeX-based ML libraries
James A. D. Gardner
Department of Computer Science, University of York, UK
Will Rowan
Department of Computer Science, University of York, UK
William A. P. Smith
Professor, Department of Computer Science, University of York
Computer Vision · Computer Graphics · Machine Learning