🤖 AI Summary
Existing reward models (RMs) lack effective means of evaluating and improving large language models' (LLMs) cross-cultural alignment, primarily because culturally aware evaluation data are scarce and RMs tend to rely on spurious surface-level statistical correlations. Method: We introduce CARB, the first multilingual reward modeling benchmark for cultural awareness, covering 10 cultures across 4 cultural domains, and use it to systematically expose RMs' dependence on shallow features in cultural reasoning. We then propose Think-as-Locals, a generative reward modeling framework guided by localized reasoning and trained with reinforcement learning from verifiable rewards (RLVR), which couples structured evaluation-criteria generation with culture-embedded analysis. Contribution/Results: Experiments demonstrate that the framework deepens RMs' cultural understanding in cross-cultural alignment tasks, lowers cultural misjudgment rates, and establishes a reproducible, scalable paradigm for evaluating and optimizing culturally aware alignment.
📝 Abstract
Reward models (RMs) are crucial for aligning large language models (LLMs) with diverse cultures, so evaluating their cultural awareness is essential for further advancing the global alignment of LLMs. However, existing RM evaluations fall short in assessing cultural awareness due to the scarcity of culturally relevant evaluation datasets. To fill this gap, we propose the Cultural Awareness Reward modeling Benchmark (CARB), covering 10 distinct cultures across 4 cultural domains. Our extensive evaluation of state-of-the-art RMs reveals their deficiencies in modeling cultural awareness and demonstrates a positive correlation between performance on CARB and performance on downstream multilingual cultural alignment tasks. Further analysis uncovers spurious correlations in culture-aware reward modeling, wherein RMs' scoring relies predominantly on surface-level features rather than an authentic understanding of cultural nuance. To address these issues, we propose Think-as-Locals, which elicits deeper, culturally grounded reasoning from generative RMs via reinforcement learning from verifiable rewards (RLVR) and employs well-designed rewards to ensure accurate preference judgments and the generation of high-quality structured evaluation criteria. Experimental results validate its efficacy in mitigating interference from spurious features and advancing culture-aware reward modeling.
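To make the RLVR reward design described above concrete, here is a minimal Python sketch of what a verifiable reward for a generative RM could look like: a format term checking that the model emits structured evaluation criteria and culture-grounded analysis before its verdict, plus an accuracy term comparing the extracted preference against the ground-truth label. The tag names, weights, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import re
from typing import Optional

# Hypothetical output template the generative RM is trained to follow:
#   <criteria> ... culture-specific evaluation criteria ... </criteria>
#   <analysis> ... culture-grounded reasoning ... </analysis>
#   <verdict>A</verdict>   (or B)
TEMPLATE = re.compile(
    r"<criteria>.+?</criteria>\s*"
    r"<analysis>.+?</analysis>\s*"
    r"<verdict>\s*([AB])\s*</verdict>\s*$",
    re.DOTALL,
)

def extract_verdict(completion: str) -> Optional[str]:
    """Return 'A' or 'B' if the completion follows the structured template."""
    match = TEMPLATE.search(completion)
    return match.group(1) if match else None

def verifiable_reward(completion: str, gold: str,
                      format_weight: float = 0.2,
                      accuracy_weight: float = 0.8) -> float:
    """Composite RLVR-style reward (weights are illustrative assumptions).

    - format term: did the RM produce criteria + analysis + verdict?
    - accuracy term: does the extracted verdict match the preference label?
    """
    verdict = extract_verdict(completion)
    format_score = 1.0 if verdict is not None else 0.0
    accuracy_score = 1.0 if verdict == gold else 0.0
    return format_weight * format_score + accuracy_weight * accuracy_score

if __name__ == "__main__":
    sample = (
        "<criteria>Respect for local gift-giving etiquette.</criteria>"
        "<analysis>Response A correctly reflects the local norm.</analysis>"
        "<verdict>A</verdict>"
    )
    print(verifiable_reward(sample, gold="A"))  # 1.0: well-formed and correct
    print(verifiable_reward(sample, gold="B"))  # 0.2: well-formed but wrong
```

Because every term is checkable against the template and the preference label, such a reward is verifiable in the RLVR sense: it requires no learned judge and gives the policy no credit for a correct verdict unless the structured criteria and analysis are also produced.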