CodePDE: An Inference Framework for LLM-driven PDE Solver Generation

📅 2025-05-13
🤖 AI Summary
Traditional PDE solvers rely heavily on expert knowledge and incur high computational costs, while neural-network-based solvers require large labeled datasets and often lack interpretability. To address these limitations, this paper introduces the first inference-time large language model (LLM) framework dedicated to PDE solving. The approach formulates PDE solving as a code-generation task, requiring no fine-tuning, and leverages inference-time algorithms for reasoning, self-debugging, self-refinement, and test-time scaling to autonomously generate executable, verifiable numerical solvers. Evaluated on a range of canonical PDEs, the method achieves superhuman performance. The paper further uncovers intrinsic LLM behaviors governing accuracy, efficiency, and numerical-scheme selection. The complete codebase is open-sourced, pointing toward interpretable, low-data-dependency AI for scientific computing.

📝 Abstract
Partial differential equations (PDEs) are fundamental to modeling physical systems, yet solving them remains a complex challenge. Traditional numerical solvers rely on expert knowledge to implement and are computationally expensive, while neural-network-based solvers require large training datasets and often lack interpretability. In this work, we frame PDE solving as a code generation task and introduce CodePDE, the first inference framework for generating PDE solvers using large language models (LLMs). Leveraging advanced inference-time algorithms and scaling strategies, CodePDE unlocks critical capacities of LLMs for PDE solving: reasoning, debugging, self-refinement, and test-time scaling -- all without task-specific tuning. CodePDE achieves superhuman performance across a range of representative PDE problems. We also present a systematic empirical analysis of LLM-generated solvers, analyzing their accuracy, efficiency, and numerical scheme choices. Our findings highlight the promise and the current limitations of LLMs in PDE solving, offering a new perspective on solver design and opportunities for future model development. Our code is available at https://github.com/LithiumDA/CodePDE.
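To make the workflow concrete, here is a minimal, hypothetical sketch of the generate-execute-debug-refine loop the abstract describes. The function names and prompts are illustrative assumptions, not CodePDE's actual API; see the linked repository for the real implementation.

```python
def generate_solver(llm, prompt):
    """Ask the LLM for complete, executable solver code (returned as a string)."""
    return llm(prompt)

def run_solver(code):
    """Execute generated solver code; return (success, result_or_error)."""
    scope = {}
    try:
        exec(code, scope)
        return True, scope.get("result")
    except Exception as err:
        return False, repr(err)

def solve_with_refinement(llm, pde_description, max_rounds=3):
    """Generate a solver, then self-debug by feeding failures back to the LLM."""
    prompt = f"Write a Python solver for this PDE:\n{pde_description}"
    code = generate_solver(llm, prompt)
    for _ in range(max_rounds):
        ok, outcome = run_solver(code)
        if ok:
            return code, outcome
        # Self-debugging step: the error message becomes part of the next prompt.
        prompt = (f"The solver below failed with {outcome}.\n"
                  f"Fix it and return the full corrected code:\n{code}")
        code = generate_solver(llm, prompt)
    return code, None
```

In practice the `llm` argument would wrap an API call to a reasoning-capable model; here any callable mapping a prompt string to a code string works, which also makes the loop easy to test with canned responses.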
Problem

Research questions and friction points this paper is trying to address.

Generates PDE solvers using LLMs without task-specific tuning
Addresses complexity and cost of traditional numerical PDE solvers
Overcomes data needs and interpretability issues of neural-network solvers
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven PDE solver generation framework
Advanced inference-time algorithms and scaling
Superhuman performance without task-specific tuning
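One simple form of the test-time scaling listed above is best-of-N selection: sample several candidate solvers and keep the one whose output best satisfies a verification check. The sketch below is illustrative only; the toy residual check (a second-order finite difference against u'' = -u) is a stand-in and not CodePDE's actual verification pipeline.

```python
import numpy as np

def residual_score(solver, xs):
    """Lower is better: deviation of the candidate's output from the
    toy equation u''(x) = -u(x), measured on a uniform grid xs."""
    u = solver(xs)
    h = xs[1] - xs[0]
    # Second-order central difference approximation of u''.
    d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return float(np.abs(d2u + u[1:-1]).max())

def best_of_n(candidates, xs):
    """Pick the candidate solver with the smallest verification residual."""
    return min(candidates, key=lambda s: residual_score(s, xs))
```

For example, between a wrong candidate `lambda x: x**2` and the exact solution `np.sin`, the selector keeps `np.sin`, whose residual is only the O(h^2) discretization error.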
Authors

Shanda Li (Carnegie Mellon University; Machine Learning)
Tanya Marwah (Flatiron Institute, Polymathic AI)
Junhong Shen (Ph.D. student in Machine Learning, Carnegie Mellon University)
Weiwei Sun (School of Computer Science, Carnegie Mellon University)
Andrej Risteski (Carnegie Mellon University; Machine Learning, Theoretical Computer Science)
Yiming Yang (School of Computer Science, Carnegie Mellon University)
Ameet Talwalkar (CMU, Datadog; Machine Learning)