Investigating Political and Demographic Associations in Large Language Models Through Moral Foundations Theory

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit systematic political-ideological bias in moral judgment. Grounded in the five-dimensional Moral Foundations Theory, we design multi-condition controlled experiments: comparing default LLM outputs against responses elicited by explicit political stance prompts and demographic role-playing, and—critically—conducting the first direct, cross-subject quantitative comparison between LLM-generated moral judgments and large-scale empirical human moral judgment data. Results reveal that LLMs exhibit a default liberal-leaning tendency, yet demonstrate high-fidelity simulation of diverse ideological perspectives when explicitly prompted, underscoring contextual responsiveness rather than inherent bias. Our primary contribution is a verifiable, operationalizable framework for assessing ideological representation in LLMs, enabling rigorous, benchmark-aligned evaluation of moral reasoning. This work establishes a methodological foundation and empirical basis for analyzing value alignment mechanisms in foundation models.
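
The multi-condition design described above lends itself to a simple prompting harness. Below is a minimal sketch, assuming a generic chat-LLM client: `query_model` is a hypothetical placeholder, and the MFQ-style relevance items, rating scale, and condition prompts are illustrative assumptions, not the paper's exact materials.

```python
# Minimal sketch of the three-condition design (assumptions, not the paper's
# exact materials): `query_model` is a hypothetical stand-in for any chat-LLM
# API call; the MFQ-style items, scale, and condition prompts are illustrative.

# One illustrative MFQ-style "relevance" item per foundation.
MFQ_ITEMS = {
    "Harm": "Whether or not someone suffered emotionally",
    "Fairness": "Whether or not some people were treated differently than others",
    "Ingroup Loyalty": "Whether or not someone's action showed love for their country",
    "Authority": "Whether or not someone showed a lack of respect for authority",
    "Purity": "Whether or not someone violated standards of purity and decency",
}

# The three experimental conditions described above: default output,
# explicit political-stance prompting, and demographic role-playing.
CONDITIONS = {
    "default": "",
    "explicit_stance": "Answer as a committed political conservative would. ",
    "persona": "You are a 45-year-old suburban parent who attends church weekly. ",
}

def query_model(prompt: str) -> float:
    """Hypothetical placeholder for an LLM API call returning a 0-5 rating."""
    raise NotImplementedError("swap in your model client here")

def score_condition(condition: str) -> dict[str, float]:
    """Administer each MFQ item under one condition; return per-foundation scores."""
    prefix = CONDITIONS[condition]
    scores = {}
    for foundation, item in MFQ_ITEMS.items():
        prompt = (
            prefix
            + "When you decide whether something is right or wrong, how relevant "
            + f"is this consideration? Rate 0 (not at all) to 5 (extremely): {item}"
        )
        scores[foundation] = query_model(prompt)
    return scores
```

In practice each item would be asked repeatedly per condition and the ratings averaged, since single samples from a stochastic decoder are noisy.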

📝 Abstract
Large Language Models (LLMs) have become increasingly incorporated into the everyday lives of many internet users, taking on significant roles as advice givers in the domains of medicine, personal relationships, and even legal matters. The importance of these roles raises questions about how LLMs respond in difficult political and moral domains, and especially about possible biases. To quantify the nature of potential biases in LLMs, various works have applied Moral Foundations Theory (MFT), a framework that categorizes human moral reasoning into five dimensions: Harm, Fairness, Ingroup Loyalty, Authority, and Purity. Previous research has used MFT to measure differences among human participants along political, national, and cultural lines. While there has been some analysis of LLM responses with respect to political stance in role-playing scenarios, no work so far has directly assessed the moral leanings in LLM responses or connected LLM outputs with robust human data. In this paper we directly analyze the distinctions between LLM MFT responses and existing human research, investigating whether responses from commonly available LLMs demonstrate ideological leanings: whether in their default outputs, in straightforward representations of political ideologies, or when responding from the perspectives of constructed human personas. We assess whether LLMs inherently generate responses that align more closely with one political ideology over another, and additionally examine how accurately LLMs can represent ideological perspectives through both explicit prompting and demographic-based role-playing. By systematically analyzing LLM behavior across these conditions and experiments, our study provides insight into the extent of political and demographic dependency in AI-generated responses.
Problem

Research questions and friction points this paper is trying to address.

Quantifying political and moral biases in LLM responses
Assessing moral leanings through Moral Foundations Theory framework
Investigating ideological alignment in LLM outputs and persona representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Moral Foundations Theory to quantify biases
Comparing LLM responses with human moral data (see the comparison sketch after this list)
Testing political leanings through role-playing prompts
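
The human-data comparison above could be operationalized as a per-foundation distance between the model's profile and published human group means. A minimal sketch follows, reusing the hypothetical `score_condition` harness from earlier; the baseline numbers are invented placeholders merely shaped like the liberal/conservative patterns reported in the MFT survey literature, not data from this paper.

```python
# Hypothetical comparison of an LLM's foundation profile against human group
# means. The numbers below are placeholders, not data from the paper.
HUMAN_BASELINE = {
    "liberal":      {"Harm": 3.7, "Fairness": 3.6, "Ingroup Loyalty": 2.1,
                     "Authority": 2.0, "Purity": 1.5},
    "conservative": {"Harm": 3.0, "Fairness": 3.0, "Ingroup Loyalty": 3.1,
                     "Authority": 3.3, "Purity": 3.0},
}

def mean_abs_deviation(llm: dict[str, float], human: dict[str, float]) -> float:
    """Average per-foundation gap between two profiles (lower = closer match)."""
    return sum(abs(llm[f] - human[f]) for f in human) / len(human)

def closest_ideology(llm_scores: dict[str, float]) -> str:
    """Label the LLM profile by the human group it deviates from least."""
    return min(HUMAN_BASELINE,
               key=lambda g: mean_abs_deviation(llm_scores, HUMAN_BASELINE[g]))

# Example with a made-up default-condition profile:
llm_default = {"Harm": 3.8, "Fairness": 3.7, "Ingroup Loyalty": 2.3,
               "Authority": 2.2, "Purity": 1.8}
print(closest_ideology(llm_default))  # -> "liberal" under these placeholders
```

Mean absolute deviation is just one reasonable distance; a correlation or a vector-similarity measure over the five foundation scores would slot in the same way.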
👥 Authors
Nicole Smith-Vaniz
Tulane University, New Orleans, LA, USA
Harper Lyon
Tulane University, New Orleans, LA, USA
Lorraine Steigner
Tulane University, New Orleans, LA, USA
Ben Armstrong
Tulane University
social choice, machine learning, ethical artificial intelligence, multiagent systems
Nicholas Mattei
Associate Professor, Tulane University
Artificial Intelligence, Computational Social Choice, Algorithms, Preferences, Decision Making