🤖 AI Summary
Traditional fault localization approaches struggle to handle semantic errors, while existing large language model (LLM)-based methods often produce stochastic, unverifiable outputs that conflate root causes with cascading effects. This work proposes SemLoc, a novel framework that introduces structured semantic grounding for LLM-based reasoning: it anchors free-form LLM-generated explanations to program-specific reference points, constructs a semantic violation spectrum via dynamic instrumentation, and incorporates a counterfactual verification mechanism to identify critical causal constraints. The approach enables runtime validation and cross-test attribution, achieving a Top-1 accuracy of 42.8% (Top-3: 68%) on the SemFault-250 benchmark while inspecting only 7.6% of code lines; ablation studies show that counterfactual verification contributes a 12% absolute gain in accuracy.
📝 Abstract
Fault localization identifies program locations responsible for observed failures. Existing techniques rank suspicious code using syntactic spectra--signals derived from execution structure such as statement coverage, control-flow divergence, or dependency reachability. These signals collapse for semantic bugs, where failing and passing executions follow identical code paths and differ only in whether semantic intent is satisfied. Recent LLM-based approaches introduce semantic reasoning but produce stochastic, unverifiable outputs that cannot be systematically cross-referenced across tests or used to distinguish root causes from cascading effects.
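To see why coverage spectra carry no signal for semantic bugs, consider the standard Ochiai suspiciousness score, where a line covered by one failing test and `ep` passing tests scores `ef / sqrt(total_fail * (ef + ep))`. The sketch below (illustrative only; line numbers and counts are hypothetical) shows that when failing and passing runs cover exactly the same lines, every line receives the same score and the ranking is uninformative:

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness per line from a coverage spectrum.

    failed_cov[l] / passed_cov[l]: number of failing / passing tests
    that covered line l; total_failed: total failing tests."""
    scores = {}
    for line in set(failed_cov) | set(passed_cov):
        ef = failed_cov.get(line, 0)  # failing tests covering this line
        ep = passed_cov.get(line, 0)  # passing tests covering this line
        denom = math.sqrt(total_failed * (ef + ep))
        scores[line] = ef / denom if denom else 0.0
    return scores

# A semantic bug: the one failing run and all four passing runs execute
# the same three lines, so every line gets an identical score.
failed = {10: 1, 11: 1, 12: 1}
passed = {10: 4, 11: 4, 12: 4}
print(ochiai(failed, passed, total_failed=1))
```

All three lines score 1/sqrt(5), so coverage-based ranking cannot separate the faulty line from its neighbors.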
We present SemLoc, a fault localization framework based on structured semantic grounding. SemLoc converts free-form LLM reasoning into a closed intermediate representation that binds each inferred property to a typed program anchor, enabling runtime checking and attribution to program structure. It executes instrumented programs to construct a semantic violation spectrum--a constraint-by-test matrix--from which suspiciousness scores are derived analogously to coverage-based methods. A counterfactual verification step further prunes over-approximate constraints and isolates primary causal violations.
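The constraint-by-test matrix can be scored analogously to coverage spectra, with violated constraints playing the role of covered lines. The following is a minimal sketch under assumptions: the matrix layout, the anchored constraint names, and the Ochiai-style formula are modeled on the description above, not taken from SemLoc's implementation:

```python
import math

def constraint_suspiciousness(violations, outcomes):
    """Rank semantic constraints by an Ochiai-style score over a
    constraint-by-test violation matrix.

    violations[c][t]: True if constraint c was violated in test t.
    outcomes[t]: 'fail' or 'pass' for test t."""
    total_fail = sum(1 for o in outcomes.values() if o == "fail")
    ranking = []
    for c, row in violations.items():
        vf = sum(1 for t, v in row.items() if v and outcomes[t] == "fail")
        vp = sum(1 for t, v in row.items() if v and outcomes[t] == "pass")
        denom = math.sqrt(total_fail * (vf + vp))
        ranking.append((c, vf / denom if denom else 0.0))
    return sorted(ranking, key=lambda x: -x[1])

# Hypothetical constraints, each bound to a typed program anchor.
# The constraint violated only in the failing test should rank first;
# the one also violated in a passing run is over-approximate and, per
# the paper, a candidate for counterfactual pruning.
outcomes = {"t1": "fail", "t2": "pass", "t3": "pass"}
violations = {
    "sorted(result) @ line 12": {"t1": True, "t2": False, "t3": False},
    "len(result) == len(xs) @ line 9": {"t1": True, "t2": True, "t3": False},
}
print(constraint_suspiciousness(violations, outcomes))
```

Because violations, unlike coverage, differ between passing and failing runs on the same path, the matrix separates constraints that identical coverage spectra could not.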
We evaluate SemLoc on SemFault-250, a corpus of 250 Python programs with single semantic faults. SemLoc outperforms five coverage-, reduction-, and LLM-based baselines, achieving Top-1 accuracy of 42.8% and Top-3 of 68%, while reducing inspection to 7.6% of executable lines. Counterfactual verification contributes a further 12% absolute accuracy gain and isolates the primary causal semantic constraints.