Toward Culturally Aligned LLMs through Ontology-Guided Multi-Agent Reasoning

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the cultural misalignment that large language models often exhibit in culturally sensitive decision-making, which stems from skewed pretraining data and the absence of structured value representations. To mitigate this, the authors propose OG-MAR, a framework that integrates a cultural ontology derived from the World Values Survey with a multi-agent collaborative reasoning mechanism. By dynamically generating value-aligned agents from demographic attributes, OG-MAR enables interpretable, culturally grounded reasoning. The approach combines ontology construction, competency-question-guided relation extraction, and retrieval-augmented generation, achieving significant improvements in cultural alignment, robustness, and reasoning transparency across four mainstream large language models and multiple regional social-survey benchmarks.

📝 Abstract
Large Language Models (LLMs) increasingly support culturally sensitive decision making, yet often exhibit misalignment due to skewed pretraining data and the absence of structured value representations. Existing methods can steer outputs, but often lack demographic grounding and treat values as independent, unstructured signals, reducing consistency and interpretability. We propose OG-MAR, an Ontology-Guided Multi-Agent Reasoning framework. OG-MAR summarizes respondent-specific values from the World Values Survey (WVS) and constructs a global cultural ontology by eliciting relations over a fixed taxonomy via competency questions. At inference time, it retrieves ontology-consistent relations and demographically similar profiles to instantiate multiple value-persona agents, whose outputs are synthesized by a judgment agent that enforces ontology consistency and demographic proximity. Experiments on regional social-survey benchmarks across four LLM backbones show that OG-MAR improves cultural alignment and robustness over competitive baselines, while producing more transparent reasoning traces.
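The inference-time flow described in the abstract (retrieve demographically similar WVS profiles, instantiate value-persona agents, synthesize their outputs via a judgment step weighted by demographic proximity) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the function and class names are assumptions, the persona agents stand in for LLM calls, and the ontology-consistency check is omitted.

```python
from dataclasses import dataclass

# Toy sketch of OG-MAR's inference-time pipeline (illustrative names,
# not from the paper): retrieve demographically similar respondent
# profiles, instantiate one value-persona "agent" per profile, then
# have a judgment step weight each agent's answer by proximity.

@dataclass
class Profile:
    demographics: dict  # e.g. {"region": "KR", "age_group": "60+"}
    values: dict        # value dimension -> stance, e.g. {"tradition": "high"}

def similarity(query: dict, demo: dict) -> float:
    """Fraction of shared demographic attributes (toy proximity metric)."""
    keys = set(query) & set(demo)
    if not keys:
        return 0.0
    return sum(query[k] == demo[k] for k in keys) / len(keys)

def persona_answer(profile: Profile, question: str) -> str:
    """Stand-in for an LLM persona agent conditioned on the profile's values."""
    return "agree" if profile.values.get("tradition") == "high" else "disagree"

def og_mar_answer(query_demo: dict, question: str,
                  pool: list[Profile], k: int = 3) -> str:
    # 1) Retrieve the k most demographically similar respondent profiles.
    ranked = sorted(pool, key=lambda p: similarity(query_demo, p.demographics),
                    reverse=True)[:k]
    # 2) Instantiate value-persona agents and collect their answers.
    votes: dict[str, float] = {}
    for p in ranked:
        ans = persona_answer(p, question)
        # 3) Judgment step: weight each vote by demographic proximity.
        votes[ans] = votes.get(ans, 0.0) + similarity(query_demo, p.demographics)
    return max(votes, key=lambda a: votes[a])

pool = [
    Profile({"region": "KR", "age_group": "60+"}, {"tradition": "high"}),
    Profile({"region": "KR", "age_group": "20s"}, {"tradition": "low"}),
    Profile({"region": "US", "age_group": "60+"}, {"tradition": "high"}),
]
print(og_mar_answer({"region": "KR", "age_group": "60+"},
                    "Respect for elders?", pool))  # prints "agree"
```

The key design point this sketch tries to capture is that no single persona decides the answer: agreement among demographically close profiles outweighs a dissenting but distant one, which is how the judgment agent enforces demographic proximity in the paper's description.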
Problem

Research questions and friction points this paper is trying to address.

cultural alignment
large language models
value representation
ontology
demographic grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

ontology-guided reasoning
multi-agent reasoning
cultural alignment
value persona
World Values Survey
Wonduk Seo
PKU Alumni; Enhans
Machine Learning, Text Mining, Information Retrieval, Social Computing, Bioinformatics
Wonseok Choi
PhD Student, POSTECH
vision language model, model evaluation, computer vision
Junseo Koh
Department of Information Management, Peking University, Beijing, China
Juhyeon Lee
Peking University
LLM
Hyunjin An
AI Research, Enhans, Seoul, South Korea
Minhyeong Yu
AI Research, Enhans, Seoul, South Korea
Jian Park
Department of Data Science, Fudan University, Shanghai, China
Qingshan Zhou
Department of Information Management, Peking University, Beijing, China
Seunghyun Lee
AI Research, Enhans, Seoul, South Korea
Yi Bu
Assistant Professor, Department of Information Management, Peking University
scholarly communication, bibliometrics, science policy, science of science, innovation