Code over Words: Overcoming Semantic Inertia via Code-Grounded Reasoning

📅 2026-01-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the susceptibility of large language models to semantic inertia when confronted with dynamic rules that contradict pretraining priors, such as the assumption that "lava is dangerous." To mitigate this, the authors propose Code-Grounded Vistas, a method that encodes rules as executable code rather than natural language and incorporates counterfactual fine-tuning based on the puzzle game *Baba Is You*. This approach compels the model to prioritize logical constraints over ingrained semantic associations. By effectively decoupling semantics from logic, the method substantially improves reasoning accuracy in scenarios involving conflicting rules. Notably, it outperforms costly test-time search strategies and reverses the inverse scaling phenomenon typically observed in such tasks, thereby underscoring the critical role of representational format in shaping model capabilities.

📝 Abstract
LLMs struggle with Semantic Inertia: the inability to inhibit pre-trained priors (e.g., "Lava is Dangerous") when dynamic, in-context rules contradict them. We probe this phenomenon using Baba Is You, where physical laws are mutable text rules, enabling precise evaluation of models' ability to override learned priors when rules change. We quantitatively observe that larger models can exhibit inverse scaling: they perform worse than smaller models when natural language reasoning requires suppressing pre-trained associations (e.g., accepting "Lava is Safe"). Our analysis attributes this to natural language encoding, which entangles descriptive semantics and logical rules, leading to persistent hallucinations of familiar physics despite explicit contradictory rules. Here we show that representing dynamics as executable code, rather than descriptive text, reverses this trend and enables effective prior inhibition. We introduce Code-Grounded Vistas (LCV), which fine-tunes models on counterfactual pairs and identifies states with contradictory rules, thereby forcing attention to logical constraints rather than visual semantics. This training-time approach outperforms expensive inference-time search methods in both efficiency and accuracy. Our results demonstrate that representation fundamentally determines whether scaling improves or impairs contextual reasoning. This challenges the assumption that larger models are universally better, with implications for domains that require dynamic overriding of learned priors.
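The abstract contrasts descriptive-text rules, which entangle semantics with logic, against executable-code rules, where dynamics follow only from the active rule set. The paper's actual encoding format is not given here; the following is a minimal illustrative sketch (all names hypothetical) of why a code representation leaves no room for a "lava is dangerous" prior once a contradictory rule like "Lava is Safe" is applied:

```python
# Hypothetical sketch of executable rule encoding, in the spirit of
# Baba Is You's mutable text rules. Names are illustrative only and
# do not reflect the paper's actual implementation.

# Natural-language encoding entangles word meaning with logic:
rule_text = "Lava is Safe"  # the model must suppress "lava = dangerous"

# Executable encoding makes danger a function of the rule set alone:
DANGEROUS: set[str] = set()

def apply_rule(subject: str, predicate: str) -> None:
    """Update world dynamics from a rule like 'Lava is Safe'."""
    if predicate == "Dangerous":
        DANGEROUS.add(subject)
    elif predicate == "Safe":
        DANGEROUS.discard(subject)

def is_dangerous(tile: str) -> bool:
    # Danger depends only on currently active rules, not on
    # the semantic associations of the tile's name.
    return tile in DANGEROUS

apply_rule("Lava", "Dangerous")  # the familiar pretraining prior
apply_rule("Lava", "Safe")       # counterfactual rule overrides it
print(is_dangerous("Lava"))      # False: logic wins over semantics
```

Under this encoding, overriding a prior is a state update rather than an act of inhibition, which is the representational distinction the abstract credits for reversing inverse scaling.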
Problem

Research questions and friction points this paper is trying to address.

Semantic Inertia
Prior Inhibition
Contextual Reasoning
Dynamic Rules
LLM Scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Inertia
Code-Grounded Reasoning
Inverse Scaling
Executable Code Representation
Prior Inhibition