🤖 AI Summary
This work addresses the cultural misalignment that large language models often exhibit in culturally sensitive decision-making, which stems from skewed pretraining data and the absence of structured value representations. To mitigate this, the authors propose OG-MAR, a framework that integrates a cultural ontology derived from the World Values Survey with a multi-agent collaborative reasoning mechanism. By dynamically generating value-aligned agents from demographic attributes, OG-MAR enables interpretable, culturally grounded reasoning. The approach combines ontology construction, competency-question-guided relation extraction, and retrieval-augmented generation, achieving significant improvements in cultural alignment, robustness, and reasoning transparency across four mainstream large language models and multiple regional social-survey benchmarks; a sketch of the retrieval step follows.
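To make the retrieval-augmented step concrete, here is a minimal sketch of retrieving demographically similar WVS respondent profiles to seed the value-persona agents. The `Profile` fields, the attribute-overlap similarity, and the weighting are illustrative assumptions, not the paper's implementation (which could, for instance, use learned embeddings instead).

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """A WVS respondent profile (field names are illustrative assumptions)."""
    country: str
    age_band: str
    education: str
    value_summary: str  # respondent-specific value summary text

def demographic_similarity(query: Profile, candidate: Profile) -> float:
    """Toy similarity: fraction of matching demographic attributes."""
    attrs = ["country", "age_band", "education"]
    matches = sum(getattr(query, a) == getattr(candidate, a) for a in attrs)
    return matches / len(attrs)

def retrieve_profiles(query: Profile, pool: list[Profile], k: int = 3) -> list[Profile]:
    """Return the k most demographically similar profiles, which would
    then instantiate the value-persona agents at inference time."""
    return sorted(pool, key=lambda p: demographic_similarity(query, p), reverse=True)[:k]
```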
📝 Abstract
Large Language Models (LLMs) increasingly support culturally sensitive decision-making, yet often exhibit misalignment due to skewed pretraining data and the absence of structured value representations. Existing methods can steer outputs, but often lack demographic grounding and treat values as independent, unstructured signals, reducing consistency and interpretability. We propose OG-MAR, an Ontology-Guided Multi-Agent Reasoning framework. OG-MAR summarizes respondent-specific values from the World Values Survey (WVS) and constructs a global cultural ontology by eliciting relations over a fixed taxonomy via competency questions. At inference time, it retrieves ontology-consistent relations and demographically similar profiles to instantiate multiple value-persona agents, whose outputs are synthesized by a judgment agent that enforces ontology consistency and demographic proximity. Experiments on regional social-survey benchmarks across four LLM backbones show that OG-MAR improves cultural alignment and robustness over competitive baselines, while producing more transparent reasoning traces.
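As a rough illustration of the inference-time flow described above, the following sketch (building on the `Profile` definition earlier) wires retrieved ontology relations and profiles into persona agents and a judgment agent. `call_llm` and the prompt wording are hypothetical placeholders, not the authors' code.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM backend; swap in any chat-completion API."""
    raise NotImplementedError

def persona_answer(profile: Profile, relations: list[str], question: str) -> str:
    """One value-persona agent: answers as a respondent with this profile,
    conditioned on ontology relations retrieved for the question."""
    prompt = (
        f"You are a survey respondent: {profile.country}, {profile.age_band}, "
        f"{profile.education}. Your values: {profile.value_summary}\n"
        f"Relevant cultural-value relations: {'; '.join(relations)}\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)

def judge(question: str, answers: list[str], relations: list[str]) -> str:
    """Judgment agent: synthesizes the persona outputs, preferring answers
    consistent with the retrieved ontology relations."""
    prompt = (
        f"Question: {question}\n"
        f"Candidate answers: {answers}\n"
        f"Ontology relations to respect: {'; '.join(relations)}\n"
        "Select or synthesize the answer most consistent with these relations, "
        "and briefly explain your reasoning:"
    )
    return call_llm(prompt)
```

The separation into persona agents plus a judge mirrors the abstract's description: diversity comes from demographically grounded personas, while consistency and transparency come from a single synthesis step constrained by the ontology.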