PLSEMANTICSBENCH: Large Language Models As Programming Language Interpreters

📅 2025-10-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether large language models (LLMs) can execute programs solely from formal semantics (such as small-step operational semantics or K-framework semantics), acting as formal interpreters. To this end, we introduce PLSemanticsBench, the first benchmark for semantics-driven program execution, featuring (i) comparative evaluation across standard and nonstandard semantics, (ii) multi-granularity assessment covering final-state correctness, semantic rule prediction, and full execution-trace fidelity, and (iii) three controllable program generation strategies: human-authored, LLM-translated, and fuzz-generated. Empirical results show that while state-of-the-art reasoning-focused LLMs benefit from formal semantics on simple programs, their performance degrades substantially on complex programs or nonstandard semantics; in intricate scenarios, providing formal semantics can even hurt. This points to reliance on surface-level pattern memorization rather than genuine semantic comprehension. Our study is the first to systematically characterize the boundaries and mechanistic limitations of LLMs' formal semantic execution capability.

📝 Abstract
As large language models (LLMs) excel at code reasoning, a natural question arises: can an LLM execute programs (i.e., act as an interpreter) purely based on a programming language's formal semantics? If so, it will enable rapid prototyping of new programming languages and language features. We study this question using the imperative language IMP (a subset of C), formalized via small-step operational semantics (SOS) and rewriting-based operational semantics (K-semantics). We introduce three evaluation sets (Human-Written, LLM-Translated, and Fuzzer-Generated) whose difficulty is controlled by code-complexity metrics spanning the size, control-flow, and data-flow axes. Given a program and its semantics formalized with SOS/K-semantics, models are evaluated on three tasks ranging from coarse to fine: (1) final-state prediction, (2) semantic rule prediction, and (3) execution trace prediction. To distinguish pretraining memorization from semantic competence, we define two nonstandard semantics obtained through systematic mutations of the standard rules. Across strong code/reasoning LLMs, performance drops under nonstandard semantics despite high performance under the standard one. We further find that (i) there are patterns to different model failures, (ii) most reasoning models perform exceptionally well on coarse-grained tasks involving reasoning about highly complex programs, often containing nested loops deeper than five levels, and, surprisingly, (iii) providing formal semantics helps on simple programs but often hurts on more complex ones. Overall, the results show promise that LLMs could serve as programming language interpreters, but point to a lack of robust semantic understanding. We release the benchmark and the supporting code at https://github.com/EngineeringSoftware/PLSemanticsBench.
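The small-step execution that models are asked to simulate can be illustrated with a toy stepper for an IMP-like fragment. This is a minimal sketch under assumed syntax (tuples for statements; ints and variable names for expressions), not the benchmark's actual rule format or its K-semantics encoding.

```python
# Minimal small-step (SOS) sketch for an IMP-like fragment.
# Assumed syntax for illustration: expressions are ints, variable names,
# or ("+"|"-"|"<", e1, e2); statements are ("skip",), ("assign", x, e),
# ("seq", s1, s2), or ("while", b, s).

def eval_expr(e, env):
    """Big-step expression evaluation (kept simple for the sketch)."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return env[e]
    op, l, r = e
    lv, rv = eval_expr(l, env), eval_expr(r, env)
    return {"+": lv + rv, "-": lv - rv, "<": int(lv < rv)}[op]

def step(stmt, env):
    """One small-step transition <stmt, env> -> <stmt', env'>."""
    kind = stmt[0]
    if kind == "assign":                      # x := e
        _, x, e = stmt
        new_env = dict(env)
        new_env[x] = eval_expr(e, env)
        return ("skip",), new_env
    if kind == "seq":                         # s1; s2
        _, s1, s2 = stmt
        if s1 == ("skip",):
            return s2, env                    # [seq-skip] rule
        s1p, env = step(s1, env)
        return ("seq", s1p, s2), env          # [seq-step] rule
    if kind == "while":                       # while b do s
        _, b, s = stmt
        if eval_expr(b, env):
            return ("seq", s, stmt), env      # [while-true]: unroll once
        return ("skip",), env                 # [while-false]
    raise ValueError(f"no rule for {kind}")

def run(stmt, env=None):
    """Drive steps to termination, recording the full execution trace."""
    env = dict(env or {})
    trace = [(stmt, env)]
    while stmt != ("skip",):
        stmt, env = step(stmt, env)
        trace.append((stmt, env))
    return env, trace

# i := 0; while i < 3 do i := i + 1
prog = ("seq", ("assign", "i", 0),
        ("while", ("<", "i", 3), ("assign", "i", ("+", "i", 1))))
final_env, trace = run(prog)
print(final_env)  # {'i': 3}
```

The three benchmark tasks map onto this sketch naturally: final-state prediction asks for `final_env`, semantic rule prediction asks which bracketed rule fires at each step, and execution trace prediction asks for the whole `trace`.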
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs as interpreters using formal programming language semantics
Testing semantic competence through standard and mutated operational semantics
Assessing execution capabilities across complexity-controlled program sets
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs interpret programs using formal semantics rules
Evaluates models on state, rule, and trace prediction tasks
Tests semantic understanding via mutated nonstandard semantics
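The memorization probe works by systematically mutating a standard rule and checking whether models follow the written semantics rather than the familiar one. The mutation below is hypothetical (the paper's actual nonstandard semantics are not reproduced here): the rule for `+` is altered to add an extra 1.

```python
# Standard vs. a hypothetical mutated rule for "+".
# The paper's real mutations differ; this only illustrates the probe.

def eval_std(e, env):
    """Standard semantics: "+" is ordinary addition."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return env[e]
    op, l, r = e
    lv, rv = eval_std(l, env), eval_std(r, env)
    return lv + rv if op == "+" else lv - rv

def eval_mut(e, env):
    """Mutated semantics: n1 + n2 -> n1 + n2 + 1 (off-by-one addition)."""
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return env[e]
    op, l, r = e
    lv, rv = eval_mut(l, env), eval_mut(r, env)
    return lv + rv + 1 if op == "+" else lv - rv

expr = ("+", "x", ("+", 2, 3))   # x + (2 + 3)
env = {"x": 10}
print(eval_std(expr, env))  # 15
print(eval_mut(expr, env))  # 17 (each of the two "+" adds an extra 1)
```

A model that genuinely reads the supplied rules should answer 17 under the mutated semantics; one that pattern-matches on familiar arithmetic will answer 15, which is exactly the gap the benchmark measures.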