OPOR-Bench: Evaluating Large Language Models on Online Public Opinion Report Generation

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack a formal task definition, a standardized benchmark dataset, and a reliable evaluation methodology for automated online public opinion report generation. Method: This work introduces the task of "Automated Online Public Opinion Report Generation," constructs the first event-centric, multi-source benchmark dataset comprising 463 real-world incidents drawn from news articles, social media posts, and expert summaries, and proposes an agent-based automatic evaluation framework that emulates domain-expert judgment. Contribution/Results: Empirical evaluation shows strong agreement between the proposed framework and human expert ratings (Spearman's ρ > 0.85). The study establishes a standardized task formulation, provides a high-quality, publicly available dataset, and delivers a reproducible evaluation paradigm, filling a critical gap in systematic LLM research for government and corporate crisis response applications.

📝 Abstract
Online Public Opinion Reports consolidate news and social media for timely crisis management by governments and enterprises. While large language models have made automated report generation technically feasible, systematic research in this specific area remains notably absent, particularly lacking formal task definitions and corresponding benchmarks. To bridge this gap, we define the Automated Online Public Opinion Report Generation (OPOR-GEN) task and construct OPOR-BENCH, an event-centric dataset covering 463 crisis events with their corresponding news articles, social media posts, and a reference summary. To evaluate report quality, we propose OPOR-EVAL, a novel agent-based framework that simulates human expert evaluation by analyzing generated reports in context. Experiments with frontier models demonstrate that our framework achieves high correlation with human judgments. Our comprehensive task definition, benchmark dataset, and evaluation framework provide a solid foundation for future research in this critical domain.
Problem

Research questions and friction points this paper is trying to address.

Defines automated online public opinion report generation task
Constructs event-centric benchmark dataset for crisis events
Proposes agent-based framework to evaluate report quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defines OPOR-GEN task for automated opinion report generation
Constructs OPOR-BENCH dataset with 463 crisis events
Proposes OPOR-EVAL agent-based framework for expert simulation
Jinzheng Yu
Communication University of China, Beijing, China
Yang Xu
Harbin Institute of Technology, Harbin, China
Haozhen Li
Harbin Institute of Technology, Harbin, China
Junqi Li
China Academy of Railway Sciences Corporation Limited, Beijing, China
Yifan Feng
Assistant Professor, NUS Business School
Ligu Zhu
Communication University of China, Beijing, China
Hao Shen
Communication University of China, Beijing, China
Lei Shi
Communication University of China, Beijing, China