🤖 AI Summary
This work investigates whether large language models (LLMs) construct and maintain an implicit "local world model" in dialogue, i.e., whether they encode, track, and dynamically update entities, coreferences, and evolving states. To this end, the authors introduce PragWorld, a benchmark built by applying seven minimal linguistic alterations to dialogues, paired with two binary (yes/no) question-answering tasks that systematically evaluate world-modeling robustness across open- and closed-weight models. They further propose a dual-perspective interpretability framework that identifies Transformer layers which help or hinder world modeling, enabling layer-wise regularization during fine-tuning. Experiments reveal substantial performance degradation under subtle linguistic alterations, exposing fragility in entity tracking and referential coherence. Guided by these insights, the authors introduce two layer-regularization fine-tuning strategies that suppress the effect of the harmful layers.
📝 Abstract
Real-world conversations are rich in pragmatic elements, such as entity mentions, references, and implicatures. Understanding such nuances is a prerequisite for successful natural communication and often requires building a local world model that encodes these elements and captures the dynamics of their evolving states. However, it is not well understood whether language models (LMs) construct or maintain a robust implicit representation of conversations. In this work, we evaluate the ability of LMs to encode and update their internal world model in dyadic conversations and test its malleability under linguistic alterations. To this end, we apply seven minimal linguistic alterations to conversations sourced from popular datasets and construct two benchmarks comprising yes/no questions. We evaluate a wide range of open- and closed-source LMs and observe that they struggle to maintain robust accuracy. Our analysis reveals that LMs fail to retain crucial details, such as the states of tracked entities, under linguistic alterations to conversations. We then propose a dual-perspective interpretability framework that identifies transformer layers that are useful or harmful and highlights the linguistic alterations most influenced by harmful layers, typically because those layers encode spurious signals or rely on shortcuts. Inspired by these insights, we propose two layer-regularization-based fine-tuning strategies that suppress the effect of the harmful layers.