Dynamic Bayesian Optimization Framework for Instruction Tuning in Partial Differential Equation Discovery

📅 2025-12-31
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability of large language models (LLMs) in symbolic discovery of partial differential equations (PDEs), which arises from their reliance on static prompts and inability to adapt to the dynamic demands of multi-step reasoning. To overcome this limitation, the authors formulate prompt engineering as a sequential decision-making problem and propose an adaptive prompt selection mechanism. By maintaining a discrete library of reasoning strategies and integrating Bayesian optimization with numerical feedback, the method dynamically selects the optimal instruction at each reasoning step to guide symbolic generation. This approach represents the first application of dynamic Bayesian optimization to instruction tuning in PDE discovery, significantly outperforming fixed-prompt baselines by achieving higher equation recovery rates and yielding more concise analytical expressions on standard benchmarks.

📝 Abstract
Large Language Models (LLMs) show promise for equation discovery, yet their outputs are highly sensitive to prompt phrasing, a phenomenon we term instruction brittleness. Static prompts cannot adapt to the evolving state of a multi-step generation process, causing models to plateau at suboptimal solutions. To address this, we propose NeuroSymBO, which reframes prompt engineering as a sequential decision problem. Our method maintains a discrete library of reasoning strategies and uses Bayesian Optimization to select the optimal instruction at each step based on numerical feedback. Experiments on PDE discovery benchmarks show that adaptive instruction selection significantly outperforms fixed prompts, achieving higher recovery rates with more parsimonious solutions.
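The abstract describes the mechanism only at a high level. As a minimal sketch of the idea, not the paper's implementation: treat the discrete library of reasoning strategies as a finite candidate set and pick the next instruction with an upper-confidence-bound acquisition over per-instruction feedback statistics, a common discrete stand-in for Bayesian optimization. The library strings, the `kappa` parameter, and the reward loop below are all hypothetical.

```python
import math

class AdaptiveInstructionSelector:
    """Per-step instruction choice over a discrete prompt library.

    A UCB acquisition over per-instruction reward statistics stands in
    for full Bayesian optimization (hypothetical sketch, not the
    paper's actual method).
    """

    def __init__(self, library, kappa=0.5):
        self.library = list(library)
        self.kappa = kappa                      # exploration weight (assumed)
        self.counts = [0] * len(self.library)   # selections per instruction
        self.means = [0.0] * len(self.library)  # mean numerical feedback

    def select(self):
        # Try every instruction once before trusting the statistics.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        total = sum(self.counts)
        # Acquisition: mean feedback plus an exploration bonus.
        def ucb(i):
            return self.means[i] + self.kappa * math.sqrt(
                math.log(total) / self.counts[i])
        return max(range(len(self.library)), key=ucb)

    def update(self, i, reward):
        # Fold the numerical feedback for instruction i (e.g. negative
        # PDE fit residual) into an incremental mean.
        self.counts[i] += 1
        self.means[i] += (reward - self.means[i]) / self.counts[i]
```

In the paper's setting the reward would come from numerical feedback on the generated candidate equation (fit quality, parsimony); the selector itself only needs a scalar per step.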
Problem

Research questions and friction points this paper is trying to address.

instruction brittleness
prompt engineering
PDE discovery
Large Language Models
adaptive instruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Bayesian Optimization
Instruction Tuning
Large Language Models
PDE Discovery
Prompt Engineering
🔎 Similar Papers
2024-10-09 · International Conference on Learning Representations · Citations: 2
Junqi Qu
Department of Computer Science, Florida State University
Yan Zhang
Department of Computer Science, Florida State University
Shangqian Gao
Florida State University
Computer Vision · Natural Language Processing · Machine Learning
Shibo Li
Florida State University
Machine Learning · Bayesian Learning · Graphical Models · Optimization