Steering LLMs for Culturally Localized Generation

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models deployed globally often misrepresent cultures that are sparsely covered in their training data, and existing localization approaches lack interpretability and controllability. This work brings mechanistic interpretability to cultural representation analysis: sparse autoencoders are used to identify interpretable features that encode culturally salient information, and these features are aggregated into a controllable Cultural Embedding (CuE). CuE enables white-box interventions that transparently and precisely steer models toward culturally faithful generation, activating latent but previously dormant long-tail cultural knowledge. Experiments show that CuE substantially outperforms prompt-only baselines in cultural expression quality across multiple models, and that it complements black-box methods, with combined use yielding further gains.

📝 Abstract
LLMs are deployed globally, yet produce responses biased towards cultures with abundant training data. Existing cultural localization approaches such as prompting or post-training alignment are black-box, hard to control, and do not reveal whether failures reflect missing knowledge or poor elicitation. In this paper, we address these gaps using mechanistic interpretability to uncover and manipulate cultural representations in LLMs. Leveraging sparse autoencoders, we identify interpretable features that encode culturally salient information and aggregate them into Cultural Embeddings (CuE). We use CuE both to analyze implicit cultural biases under underspecified prompts and to construct white-box steering interventions. Across multiple models, we show that CuE-based steering increases cultural faithfulness and elicits significantly rarer, long-tail cultural concepts than prompting alone. Notably, CuE-based steering is complementary to black-box localization methods, offering gains when applied on top of prompt-augmented inputs. This also suggests that models do benefit from better elicitation strategies, and don't necessarily lack long-tail knowledge representation, though this varies across cultures. Our results provide both diagnostic insight into cultural representations in LLMs and a controllable method to steer towards desired cultures.
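The steering method described in the abstract (aggregating SAE feature directions into a Cultural Embedding, then adding it to hidden states) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SAE decoder here is a random toy stand-in, the feature indices are hypothetical, and `alpha` is an assumed steering strength.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features = 64, 512

# Toy SAE decoder: each row is one interpretable feature direction.
# (Hypothetical stand-in; a real SAE is trained on model activations.)
W_dec = rng.normal(size=(n_features, d_model))
W_dec /= np.linalg.norm(W_dec, axis=1, keepdims=True)

def cultural_embedding(feature_ids, weights=None):
    """Aggregate selected SAE feature directions into a single
    unit-norm steering vector (a stand-in for the paper's CuE)."""
    w = np.ones(len(feature_ids)) if weights is None else np.asarray(weights)
    vec = (w[:, None] * W_dec[feature_ids]).sum(axis=0)
    return vec / np.linalg.norm(vec)

def steer(hidden, cue, alpha=4.0):
    """White-box intervention: add the scaled CuE direction to a
    residual-stream activation during the forward pass."""
    return hidden + alpha * cue

# Indices of features hypothetically tied to one culture.
culture_features = [3, 17, 42]
cue = cultural_embedding(culture_features)

h = rng.normal(size=d_model)       # a hidden state at some layer
h_steered = steer(h, cue, alpha=4.0)

# The steered activation has a larger projection onto the CuE direction.
print(float(cue @ h), float(cue @ h_steered))
```

In a real model, `steer` would run inside a forward hook on a chosen transformer layer so every generated token is produced from the shifted activations; the analysis direction also works in reverse, projecting activations onto CuE to diagnose implicit cultural bias under underspecified prompts.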
Problem

Research questions and friction points this paper is trying to address.

- cultural localization
- large language models
- cultural bias
- mechanistic interpretability
- cultural representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

- mechanistic interpretability
- cultural localization
- sparse autoencoders
- Cultural Embeddings
- white-box steering