Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
It remains unclear whether language models (LMs) can reliably classify event modality (e.g., *possible*, *impossible*, or *nonsensical*) and whether such classifications align with human intuitions. Method: Using techniques from mechanistic interpretability, the authors conduct cross-layer and cross-training-step analyses of neural activations to identify and track latent representations underlying modality judgments. Contribution/Results: They identify linear "modal difference vectors" that robustly encode distinctions such as *possible vs. impossible*; these vectors emerge in a consistent order as models become more competent, across layers, training steps, and parameter counts. Projections along these vectors predict fine-grained human ratings of event plausibility and comprehensibility. The findings reveal an interpretable, stable internal mechanism for modality classification in LMs, suggesting more reliable modal categorization than previously reported, and offer a new window into how humans distinguish between modal categories.

📝 Abstract
Language models (LMs) are used for a diverse range of tasks, from question answering to writing fantastical stories. In order to reliably accomplish these tasks, LMs must be able to discern the modal category of a sentence (i.e., whether it describes something that is possible, impossible, completely nonsensical, etc.). However, recent studies have called into question the ability of LMs to categorize sentences according to modality (Michaelov et al., 2025; Kauf et al., 2023). In this work, we identify linear representations that discriminate between modal categories within a variety of LMs, or modal difference vectors. Analysis of modal difference vectors reveals that LMs have access to more reliable modal categorization judgments than previously reported. Furthermore, we find that modal difference vectors emerge in a consistent order as models become more competent (i.e., through training steps, layers, and parameter count). Notably, we find that modal difference vectors identified within LM activations can be used to model fine-grained human categorization behavior. This potentially provides a novel view into how human participants distinguish between modal categories, which we explore by correlating projections along modal difference vectors with human participants' ratings of interpretable features. In summary, we derive new insights into LM modal categorization using techniques from mechanistic interpretability, with the potential to inform our understanding of modal categorization in humans.
Problem

Research questions and friction points this paper is trying to address.

Assess LMs' ability to discern sentence modal categories.
Identify linear representations for modal category discrimination.
Explore LM-human alignment in modal categorization judgments.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear representations ("modal difference vectors") discriminate between modal categories in LM activations.
Projections along modal difference vectors model fine-grained human categorization behavior.
Mechanistic interpretability reveals more reliable LM modal judgments than previously reported.
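At its core, a modal difference vector is a linear direction in activation space that separates two modal categories. The paper does not spell out its extraction procedure here, but a common baseline for finding such a direction is the difference of class means. A minimal sketch with synthetic stand-in activations (all data and dimensions below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for LM hidden states at one layer:
# activations for sentences labeled "possible" vs. "impossible".
d = 64
act_possible = rng.normal(loc=0.5, scale=1.0, size=(100, d))
act_impossible = rng.normal(loc=-0.5, scale=1.0, size=(100, d))

# Modal difference vector: difference of class means, normalized.
diff = act_possible.mean(axis=0) - act_impossible.mean(axis=0)
diff /= np.linalg.norm(diff)

# Projecting activations onto the vector yields a scalar score that can
# be thresholded for classification, or correlated with graded human
# ratings of plausibility and comprehensibility.
scores_pos = act_possible @ diff
scores_imp = act_impossible @ diff

accuracy = ((scores_pos > 0).mean() + (scores_imp < 0).mean()) / 2
print(f"separation accuracy on the synthetic data: {accuracy:.2f}")
```

Because the projection is a continuous score rather than a hard label, it can be compared against fine-grained human judgments, which is the kind of LM-human alignment analysis the abstract describes.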