🤖 AI Summary
This work addresses a limitation of current language models: they treat culture as a static backdrop and thus fail to capture its dynamic, evolving nature, which leads to unreliable performance on culturally sensitive tasks. To overcome this, the authors propose a cultural self-awareness mechanism that decouples task semantics from explicit and implicit cultural signals and organizes them into structured cultural clusters. By integrating contrastive learning, cross-attention alignment, a Mixture-of-Experts, and reflective self-prompting, the model acquires an updatable internal cultural identity state. This approach enables, for the first time, fine-grained modeling and continuous self-correction of dynamic cultural traits. Empirical results demonstrate significant improvements over state-of-the-art methods across multiple cross-cultural benchmarks, exhibiting enhanced cultural adaptability and task reliability.
📄 Abstract
Cultural awareness in language models is the capacity to understand and adapt to diverse cultural contexts. However, most existing approaches treat culture as static background knowledge, overlooking its dynamic and evolving nature. This limitation reduces their reliability in downstream tasks that demand genuine cultural sensitivity. In this work, we introduce CALM, a novel framework designed to endow language models with cultural self-awareness. CALM disentangles task semantics from explicit cultural concepts and latent cultural signals, shaping them into structured cultural clusters through contrastive learning. These clusters are then aligned via cross-attention to establish fine-grained interactions among related cultural features and are adaptively integrated through a Mixture-of-Experts mechanism along culture-specific dimensions. The resulting unified representation is fused with the model's original knowledge to construct a culturally grounded internal identity state, which is further enhanced through self-prompted reflective learning, enabling continual adaptation and self-correction. Extensive experiments conducted on multiple cross-cultural benchmark datasets demonstrate that CALM consistently outperforms state-of-the-art methods.
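No implementation details are given in this excerpt, but the two fusion steps the abstract names, cross-attention alignment over cultural clusters followed by Mixture-of-Experts integration, can be sketched in a minimal form. Everything below is a hypothetical illustration: the shapes, the linear experts, and the softmax gate are assumptions, not CALM's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys, values):
    # Scaled dot-product attention: the task embedding (query) attends
    # over the cultural cluster features (keys/values) to align them.
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)          # (1, n_clusters)
    weights = softmax(scores, axis=-1)
    return weights @ values                       # (1, d)

def moe_fuse(x, experts, gate_w):
    # Each "expert" here is just a linear map along one culture-specific
    # dimension; a softmax gate mixes their outputs adaptively.
    gates = softmax(x @ gate_w)                   # (1, n_experts)
    outs = np.stack([x @ w for w in experts], axis=1)  # (1, n_experts, d)
    return (gates[..., None] * outs).sum(axis=1)  # (1, d)

# Toy dimensions (hypothetical)
rng = np.random.default_rng(0)
d, n_clusters, n_experts = 8, 4, 3
task_emb = rng.normal(size=(1, d))               # task-semantic representation
cluster_feats = rng.normal(size=(n_clusters, d)) # structured cultural clusters
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))

aligned = cross_attention(task_emb, cluster_feats, cluster_feats)
fused = moe_fuse(aligned, experts, gate_w)       # unified cultural representation
```

In the full framework, `fused` would then be combined with the model's original knowledge to form the internal identity state; the contrastive clustering and reflective self-prompting stages are omitted here for brevity.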