Embracing Dialectic Intersubjectivity: Coordination of Different Perspectives in Content Analysis with LLM Persona Simulation

📅 2025-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the inherent bias and low epistemic reliability of AI in analyzing politically slanted media coverage. We propose a “dialectical intersubjectivity” framework that replaces the pursuit of single-coder consensus with collaborative sentiment analysis conducted by six partisan persona agents—implemented via GPT-4o and calibrated to ideological archetypes (e.g., Fox News– and MSNBC-aligned stances). Using ideological alignment assessment, cross-persona intercoder reliability testing, and comparative content analysis, we find that ideologically congruent persona agents exhibit stronger systematic bias toward stance-congruent texts, yet achieve significantly higher internal reliability than cross-ideological combinations. This approach is the first to formalize ideological divergence as a set of cooperative analytical agents, revealing large language models’ selective processing mechanisms in political contexts. It substantially enhances transparency, interpretability, and methodological rigor in AI-assisted social science research.

📝 Abstract
This study attempts to advance content analysis methodology from consensus-oriented to coordination-oriented practices, thereby embracing diverse coding outputs and exploring the dynamics among different perspectives. As an exploratory investigation of this approach, we evaluate six GPT-4o configurations that analyze sentiment in Fox News and MSNBC transcripts about Biden and Trump during the 2020 U.S. presidential campaign, examining patterns across these models. By assessing each model's alignment with ideological perspectives, we explore how partisan selective processing can be identified in LLM-Assisted Content Analysis (LACA). Findings reveal that partisan persona LLMs exhibit stronger ideological biases when processing politically congruent content. Additionally, intercoder reliability is higher among same-partisan personas than among cross-partisan pairs. This approach enables a more nuanced understanding of LLM outputs and strengthens the integrity of AI-driven social science research, enabling simulations with real-world implications.
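The cross-persona intercoder reliability test described in the abstract can be illustrated with a minimal sketch. The paper does not publish its code, so the snippet below is an assumption-labeled toy: it implements Cohen's kappa from scratch and compares hypothetical sentiment codes (-1 negative, 0 neutral, +1 positive) from made-up persona coders; the persona names and label sequences are illustrative only, not the study's data.

```python
# Hedged sketch: pairwise intercoder reliability among persona coders.
# All persona names and label sequences below are hypothetical examples,
# not data from the paper.
from collections import Counter
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa for two coders' parallel label lists."""
    assert len(a) == len(b) and a, "need equal-length, non-empty codings"
    n = len(a)
    # Observed agreement: fraction of items both coders labeled identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement from each coder's marginal label distribution.
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

codes = {
    "conservative_1": [-1, -1, 0, 1, -1, 0],
    "conservative_2": [-1, -1, 0, 1, 0, 0],
    "liberal_1":      [1, 0, 0, -1, 1, 0],
    "liberal_2":      [1, 1, 0, -1, 1, -1],
}

for (n1, c1), (n2, c2) in combinations(codes.items(), 2):
    print(f"{n1} vs {n2}: kappa = {cohens_kappa(c1, c2):.2f}")
```

Under the study's finding, same-partisan pairs (e.g. the two conservative personas here) would show higher kappa than cross-partisan pairs; the study itself uses six GPT-4o persona configurations rather than hand-coded lists.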
Problem

Research questions and friction points this paper is trying to address.

Artificial Intelligence
Bias Detection
Political Media Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diverse Perspectives Simulation
Political Bias Analysis
GPT-4o Configurations
Taewoo Kang
Department of Media and Information, Michigan State University
Kjerstin Thorson
College of Liberal Arts, Colorado State University
Tai-Quan Peng
Professor of Communication, Michigan State University
communication · computational social science · social media · mobile media
Dan Hiaeshutter-Rice
Department of Advertising and Public Relations, Michigan State University
Sanguk Lee
Department of Communication Studies, Texas Christian University
Stuart N. Soroka
Departments of Communication and Political Science, University of California, Los Angeles