A Universal Vibe? Finding and Controlling Language-Agnostic Informal Register with SAEs

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether multilingual large language models encode culturally specific informal registers—such as slang—as language-agnostic abstract concepts. By applying sparse autoencoders (SAEs) to the internal representations of Gemma-2-9B-IT across English, Hebrew, and Russian, and leveraging a newly constructed dataset of polysemous words in context to disentangle pragmatic style from lexical semantics, the authors conduct cross-lingual representational geometry analyses. The work provides the first mechanistic evidence that deep model layers develop a geometrically consistent cross-lingual subspace for informal register. Crucially, interventions targeting activations within this subspace enable causal control over formality in the source languages and achieve zero-shot transfer of register manipulation to six unseen languages.
📝 Abstract
While multilingual language models successfully transfer factual and syntactic knowledge across languages, it remains unclear whether they process culture-specific pragmatic registers, such as slang, as isolated language-specific memorizations or as unified, abstract concepts. We study this by probing the internal representations of Gemma-2-9B-IT using Sparse Autoencoders (SAEs) across three typologically diverse source languages: English, Hebrew, and Russian. To definitively isolate pragmatic register processing from trivial lexical sensitivity, we introduce a novel dataset in which every target term is polysemous, appearing in both literal and informal contexts. We find that while much of the informal-register signal is distributed across language-specific features, a small but highly robust cross-linguistic core consistently emerges. This shared core forms a geometrically coherent "informal register subspace" that sharpens in the model's deeper layers. Crucially, these shared representations are not merely correlational: activation steering with these features causally shifts output formality across all source languages and transfers zero-shot to six unseen languages spanning diverse language families and scripts. Together, these results provide the first mechanistic evidence that multilingual LLMs internalize informal register not just as surface-level heuristics, but as a portable, language-agnostic pragmatic abstraction.
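The activation-steering intervention the abstract describes can be sketched in a few lines: an SAE feature's decoder direction is added (scaled) to a residual-stream activation, shifting the output along the register axis. The snippet below is a toy illustration, not the authors' code; the names (`steer`, `direction`, `alpha`) and the toy hidden size are assumptions for exposition.

```python
# Hypothetical sketch of SAE-based activation steering (illustrative only;
# not the paper's implementation). A positive alpha pushes the activation
# toward the "informal register" direction; a negative alpha toward formal.

d_model = 8  # toy hidden size; a real model's residual stream is far larger

# A unit-norm feature direction, standing in for an SAE decoder column
# identified as an informal-register feature (values are made up).
direction = [1.0 if i == 0 else 0.0 for i in range(d_model)]

def steer(hidden, direction, alpha):
    """Shift a residual-stream activation along a feature direction."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

hidden = [0.5] * d_model
steered = steer(hidden, direction, alpha=3.0)

# The projection onto the unit direction increases by exactly alpha.
proj_delta = sum((s - h) * d for s, h, d in zip(steered, hidden, direction))
print(proj_delta)  # 3.0
```

In practice this addition would be applied at a chosen deep layer during generation (e.g. via a forward hook), with `alpha` swept to trade off steering strength against fluency.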
Problem

Research questions and friction points this paper is trying to address.

pragmatic register
informal language
multilingual LLMs
language-agnostic representation
cross-linguistic abstraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

language-agnostic pragmatic abstraction
sparse autoencoders
informal register subspace
activation steering
cross-linguistic generalization