Balancing Engagement and Polarization: Multi-Objective Alignment of News Content Using LLMs

📅 2025-04-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language model (LLM)-generated news risks exacerbating media polarization, because optimizing for user engagement tends to push output polarization beyond editorial thresholds. Method: We propose a controllable content generation framework that jointly optimizes user engagement and editorial stance alignment. To this end, we employ Multi-Objective Direct Preference Optimization (MODPO), which integrates multi-objective optimization into the DPO paradigm, augmented with a polarization-sensitive feature intervention mechanism. Results: Evaluated on open-source LLMs using *The New York Times* corpus, MODPO significantly improves engagement metrics—including click-through rate and dwell time—while constraining output polarization within predefined editorial bounds, mitigating the risk of algorithmic fragmentation in news ecosystems. Our core contribution is a tunable, interpretable, and editorially compliant multi-objective alignment paradigm for news generation.

📝 Abstract
We study how media firms can use LLMs to generate news content that aligns with multiple objectives -- making content more engaging while maintaining a preferred level of polarization/slant consistent with the firm's editorial policy. Using news articles from The New York Times, we first show that more engaging human-written content tends to be more polarizing. Further, naively employing LLMs (with prompts or standard Direct Preference Optimization approaches) to generate more engaging content can also increase polarization. This has an important managerial and policy implication: using LLMs without building in controls for limiting slant can exacerbate news media polarization. We present a constructive solution to this problem based on the Multi-Objective Direct Preference Optimization (MODPO) algorithm, a novel approach that integrates Direct Preference Optimization with multi-objective optimization techniques. We build on open-source LLMs and develop a new language model that simultaneously makes content more engaging while maintaining a preferred editorial stance. Our model achieves this by modifying content characteristics strongly associated with polarization but that have a relatively smaller impact on engagement. Our approach and findings apply to other settings where firms seek to use LLMs for content creation to achieve multiple objectives, e.g., advertising and social media.
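The multi-objective idea described above—trading off an engagement preference against a polarization constraint inside a DPO-style loss—can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's exact formulation: it combines the standard DPO implicit-reward difference with a margin from a hypothetical frozen polarization scorer, weighted by a tunable `w_engage`; all function and parameter names here are illustrative.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def modpo_loss(logratio_chosen: float, logratio_rejected: float,
               polar_chosen: float, polar_rejected: float,
               beta: float = 0.1, w_engage: float = 0.7) -> float:
    """Simplified MODPO-style loss for one preference pair.

    logratio_*: log pi_theta(y|x) - log pi_ref(y|x) for the
                chosen / rejected response (DPO implicit reward).
    polar_*:    score from a frozen polarization reward model
                (higher = more polarizing); illustrative only.
    w_engage:   weight on the engagement (preference) objective;
                (1 - w_engage) weights the polarization margin.
    """
    w_polar = 1.0 - w_engage
    # DPO preference term, rescaled by the engagement weight
    pref_term = (beta / w_engage) * (logratio_chosen - logratio_rejected)
    # Margin term penalizes preferring the more polarizing response
    margin_term = (w_polar / w_engage) * (polar_chosen - polar_rejected)
    return -math.log(sigmoid(pref_term - margin_term))
```

Under this sketch, raising `w_engage` toward 1 recovers plain DPO on engagement preferences, while lowering it makes the polarization margin dominate—mirroring the tunable engagement/slant trade-off the abstract describes.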
Problem

Research questions and friction points this paper is trying to address.

Balancing engagement and polarization in news content
Using LLMs to align content with multiple objectives
Preventing increased polarization while enhancing engagement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses MODPO for multi-objective content alignment
Modifies content to control polarization and engagement
Leverages LLMs with editorial stance preservation
Mengjie Cheng
Harvard Business School
E. Ofek
Harvard Business School
Hema Yoganarasimhan
University of Washington
Marketing