AI Summary
This work addresses the susceptibility of large language models to semantic inertia when confronted with dynamic rules that contradict pretraining priors, such as the assumption that "lava is dangerous." To mitigate this, the authors propose Code-Grounded Vistas, a method that encodes rules as executable code rather than natural language and incorporates counterfactual fine-tuning based on the puzzle game *Baba Is You*. This approach compels the model to prioritize logical constraints over ingrained semantic associations. By effectively decoupling semantics from logic, the method substantially improves reasoning accuracy in scenarios involving conflicting rules. Notably, it outperforms costly test-time search strategies and reverses the inverse scaling phenomenon typically observed in such tasks, thereby underscoring the critical role of representational format in shaping model capabilities.
Abstract
LLMs struggle with Semantic Inertia: the inability to inhibit pre-trained priors (e.g., "Lava is Dangerous") when dynamic, in-context rules contradict them. We probe this phenomenon using Baba Is You, where physical laws are mutable text rules, enabling precise evaluation of models' ability to override learned priors when rules change. We quantitatively observe that larger models can exhibit inverse scaling: they perform worse than smaller models when natural language reasoning requires suppressing pre-trained associations (e.g., accepting "Lava is Safe"). Our analysis attributes this to natural language encoding, which entangles descriptive semantics and logical rules, leading to persistent hallucinations of familiar physics despite explicit contradictory rules. Here we show that representing dynamics as executable code, rather than descriptive text, reverses this trend and enables effective prior inhibition. We introduce Code-Grounded Vistas (LCV), which fine-tunes models on counterfactual pairs and identifies states with contradictory rules, thereby forcing attention to logical constraints rather than visual semantics. This training-time approach outperforms expensive inference-time search methods in both efficiency and accuracy. Our results demonstrate that representation fundamentally determines whether scaling improves or impairs contextual reasoning, challenging the assumption that larger models are universally better, with implications for domains that require dynamic overriding of learned priors.
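To make the core idea concrete, here is a minimal sketch (not the paper's actual encoding; all names are hypothetical) of the difference between a descriptive-text rule and an executable-code rule. In code form, "Lava is Safe" is not a statement competing with a semantic prior but a mutation of the state that determines behavior:

```python
# Hypothetical sketch of code-grounded rules: properties live in a mutable
# mapping, so a counterfactual rule is a state change, not a claim to weigh
# against pretraining priors.
from dataclasses import dataclass, field


@dataclass
class RuleState:
    # Default reflects the familiar prior: lava is dangerous.
    dangerous: set[str] = field(default_factory=lambda: {"lava"})

    def apply_rule(self, obj: str, prop: str) -> None:
        """Apply an in-context rule such as 'Lava is Safe'."""
        if prop == "safe":
            self.dangerous.discard(obj)
        elif prop == "dangerous":
            self.dangerous.add(obj)

    def is_dangerous(self, obj: str) -> bool:
        # The answer follows from the current rule state alone.
        return obj in self.dangerous


state = RuleState()
before = state.is_dangerous("lava")   # True: prior holds by default
state.apply_rule("lava", "safe")      # dynamic rule contradicts the prior
after = state.is_dangerous("lava")    # False: logic overrides semantics
```

The point of the sketch is that once dynamics are executable, "overriding a prior" reduces to reading the current state, rather than suppressing an entangled semantic association in text.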