Evaluating and Improving Cultural Awareness of Reward Models for LLM Alignment

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing reward models (RMs) lack effective means to evaluate and improve large language models' (LLMs) cross-cultural alignment, primarily due to the scarcity of culturally aware evaluation data and RMs' tendency to rely on spurious surface-level statistical correlations. Method: We introduce CARB, the first multilingual reward modeling benchmark covering 10 cultures, to systematically expose RMs' reliance on shallow features in cultural reasoning. We propose Think-as-Locals, a generative reward modeling framework that elicits localized reasoning, trained with reinforcement learning from verifiable rewards (RLVR) and combining structured evaluation-criteria generation with culturally grounded analysis. Contribution/Results: Experiments demonstrate that our framework significantly enhances RMs' deep cultural understanding in cross-cultural alignment tasks, reduces cultural misjudgment rates, and establishes a reproducible, scalable paradigm for evaluating and optimizing culturally intelligent alignment.

📝 Abstract
Reward models (RMs) are crucial for aligning large language models (LLMs) with diverse cultures. Consequently, evaluating their cultural awareness is essential for further advancing global alignment of LLMs. However, existing RM evaluations fall short in assessing cultural awareness due to the scarcity of culturally relevant evaluation datasets. To fill this gap, we propose the Cultural Awareness Reward modeling Benchmark (CARB), covering 10 distinct cultures across 4 cultural domains. Our extensive evaluation of state-of-the-art RMs reveals their deficiencies in modeling cultural awareness and demonstrates a positive correlation between performance on CARB and downstream multilingual cultural alignment tasks. Further analysis identifies spurious correlations within culture-aware reward modeling, wherein RMs' scoring relies predominantly on surface-level features rather than authentic understanding of cultural nuance. To address these issues, we propose Think-as-Locals, which elicits deeper culturally grounded reasoning from generative RMs via reinforcement learning from verifiable rewards (RLVR) and employs well-designed rewards to ensure accurate preference judgments and high-quality structured evaluation-criteria generation. Experimental results validate its efficacy in mitigating spurious-feature interference and advancing culture-aware reward modeling.
Problem

Research questions and friction points this paper is trying to address.

Evaluating cultural awareness deficiencies in reward models for LLMs
Addressing spurious correlations in culture-aware reward modeling
Improving cultural reasoning via reinforcement learning with verifiable rewards
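The spurious-correlation problem above can be illustrated with a minimal probe. This is a hypothetical sketch, not the paper's CARB pipeline: `surface_score` stands in for an RM that rewards shallow cues (length, culture-keyword density), and `spurious_gap` shows how such an RM can prefer a keyword-stuffed but culturally wrong answer over a correct one. All function names and the keyword list are illustrative assumptions.

```python
# Illustrative probe for surface-feature reliance in reward scoring.
# A deliberately shallow "RM" that scores on length and keyword counts,
# regardless of whether the cultural content is actually correct.

def surface_score(response: str) -> float:
    """Shallow reward: longer answers containing culture keywords score
    higher, independent of cultural correctness."""
    keywords = ("tradition", "custom", "festival")
    bonus = sum(response.lower().count(k) for k in keywords)
    return 0.01 * len(response) + 0.5 * bonus

def spurious_gap(correct: str, shallow: str) -> float:
    """Positive gap means the shallow RM flips the preference toward the
    keyword-stuffed but culturally wrong answer."""
    return surface_score(shallow) - surface_score(correct)

correct = "In Japan, shoes are removed before entering a home."
shallow = ("This tradition is a custom rooted in tradition and custom, "
           "celebrated at every festival as a beloved tradition.")
print(spurious_gap(correct, shallow) > 0)  # → True: the shallow RM misjudges
```

A culture-aware RM should keep this gap negative; a benchmark like CARB can expose models where it is not.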
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed the Cultural Awareness Reward modeling Benchmark (CARB)
Introduced Think-as-Locals with reinforcement learning from verifiable rewards
Employed well-designed rewards for accurate cultural preference judgments
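The verifiable-reward idea behind RLVR can be sketched as follows. This is a hedged illustration, not the paper's actual reward design: the `<criteria>`/`<answer>` tag format, the 0.2/0.8 weighting, and the function name `verifiable_reward` are all assumptions. The sketch combines a format check (structured evaluation criteria were emitted) with an accuracy check (the preference judgment matches a gold label).

```python
# Hypothetical verifiable reward in the spirit of RLVR: reward is computed
# by deterministic checks on the generative RM's output, not by another
# learned model. Tag format and weights are illustrative assumptions.
import re

def verifiable_reward(output: str, gold_choice: str) -> float:
    """Return 1.0 for a well-formatted, correct preference judgment;
    partial credit for correct format only; 0.0 for unparseable output."""
    criteria = re.search(r"<criteria>(.+?)</criteria>", output, re.S)
    answer = re.search(r"<answer>([AB])</answer>", output)
    if not (criteria and answer):
        return 0.0                      # unverifiable output gets no reward
    format_reward = 0.2                 # structured criteria were produced
    accuracy_reward = 0.8 if answer.group(1) == gold_choice else 0.0
    return format_reward + accuracy_reward

out = "<criteria>Respects local dining etiquette.</criteria><answer>A</answer>"
print(verifiable_reward(out, "A"))  # → 1.0
```

Because both checks are mechanical, the reward signal is reproducible across runs, which is what makes it usable for reinforcement learning without a separate learned judge.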