🤖 AI Summary
Small language models (SLMs) frequently generate syntactically invalid or semantically flawed probabilistic programs—e.g., ill-defined distributions or mismatched inference structures—hindering their reliable use in probabilistic programming.
Method: We propose RefineStat, a language model-driven probabilistic program synthesis framework that integrates domain-expert-defined semantic constraints with diagnosis-aware, fine-grained repair strategies. RefineStat performs dynamic resampling of prior/likelihood components, iterative semantic validation, and reliability checking to enable structured error correction and parameter optimization.
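The paper's implementation details aren't given here, but the validate-then-resample loop described above can be sketched roughly as follows. All names (`CONSTRAINTS`, `validate`, `resample_component`, `refine`) and the toy constraint table are assumptions for illustration, not RefineStat's actual API; the real semantic constraints are defined by domain experts and the resampling re-queries the language model rather than drawing random parameters.

```python
import random

# Hypothetical semantic-constraint table (assumption: RefineStat's real,
# expert-defined constraint set is far richer than these two checks).
CONSTRAINTS = {
    "Normal": lambda p: p.get("sigma", 0) > 0,       # scale must be positive
    "Exponential": lambda p: p.get("rate", 0) > 0,   # rate must be positive
}

def validate(program):
    """Return names of components whose parameters violate a constraint."""
    return [name for name, (dist, params) in program.items()
            if not CONSTRAINTS.get(dist, lambda _: True)(params)]

def resample_component(dist):
    """Stand-in for re-querying the language model for one component."""
    if dist == "Normal":
        return {"mu": random.gauss(0, 1),
                "sigma": abs(random.gauss(1, 0.5)) + 1e-3}
    return {"rate": abs(random.gauss(1, 0.5)) + 1e-3}

def refine(program, max_rounds=10):
    """Diagnosis-aware repair: resample only the components that fail."""
    for _ in range(max_rounds):
        bad = validate(program)
        if not bad:
            return program
        for name in bad:
            dist, _ = program[name]
            program[name] = (dist, resample_component(dist))
    return program

# An SLM draft with an ill-defined prior (sigma <= 0) that the loop repairs.
draft = {"prior": ("Normal", {"mu": 0.0, "sigma": -1.0}),
         "likelihood": ("Exponential", {"rate": 2.0})}
repaired = refine(draft)
```

The key design point is fine granularity: only the offending prior/likelihood component is resampled, rather than regenerating the whole program.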
Contribution/Results: Unlike black-box generation that relies on large language models (LLMs), RefineStat significantly improves both the syntactic correctness and the statistical reliability of SLM-generated programs under strict domain constraints. Experiments across multiple probabilistic programming tasks show that RefineStat matches or surpasses closed-source LLMs (e.g., OpenAI o3) in accuracy and robustness, establishing a new paradigm for trustworthy statistical programming in resource-constrained settings.
📝 Abstract
Probabilistic programming offers a powerful framework for modeling uncertainty, yet statistical model discovery in this domain entails navigating an immense search space under strict domain-specific constraints. When small language models are tasked with generating probabilistic programs, they frequently produce outputs that suffer from both syntactic and semantic errors, such as flawed inference constructs. Motivated by probabilistic programmers' domain expertise and debugging strategies, we introduce RefineStat, a language model-driven framework that enforces semantic constraints ensuring synthesized programs contain valid distributions and well-formed parameters, and then applies diagnosis-aware refinement by resampling prior or likelihood components whenever reliability checks fail. We evaluate RefineStat on multiple probabilistic-programming code-generation tasks using smaller language models (SLMs) and find that it produces programs that are both syntactically sound and statistically reliable, often matching or surpassing those from closed-source large language models (e.g., OpenAI o3).