POLAR: A Benchmark for Multilingual, Multicultural, and Multi-Event Online Polarization

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing online polarization research is constrained by monolingual, monocultural, or single-event perspectives, and lacks comprehensive benchmark datasets spanning multiple languages, cultures, and real-world events. Method: We construct the first multilingual, multicultural, multi-event online polarization benchmark, comprising over 23k samples across seven languages and equipped with a novel three-dimensional fine-grained annotation schema (existence, type, and manifestation). Our methodology integrates human annotation, fine-tuning of six multilingual pretrained models (in monolingual and cross-lingual setups), and evaluation of open- and closed-source large language models in few-shot and zero-shot settings. Contribution/Results: Experiments reveal that current models perform robustly on binary polarization detection but degrade significantly when predicting polarization type and manifestation, underscoring the strongly contextual nature of the task. To our knowledge, this is the first unified annotation framework supporting multilingual, multicultural, and multi-event polarization analysis. All data and code are publicly released.
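The three-dimensional schema described above can be sketched as a small data structure. This is a minimal illustration, not the paper's actual format: the summary names the three axes (existence, type, manifestation) but not their concrete label sets, so `POLARIZATION_TYPES` and `MANIFESTATIONS` below are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical label sets: the summary names the three axes but not
# their concrete values, so these are illustrative placeholders.
POLARIZATION_TYPES = {"political", "religious", "ethnic", "other"}
MANIFESTATIONS = {"hostile_language", "us_vs_them_framing", "stereotyping", "other"}


@dataclass
class PolarAnnotation:
    """One sample annotated along the three POLAR axes."""
    text: str
    language: str                        # one of the seven dataset languages
    polarized: bool                      # axis 1: existence (binary)
    pol_type: Optional[str] = None       # axis 2: type (only if polarized)
    manifestation: Optional[str] = None  # axis 3: manifestation (only if polarized)

    def __post_init__(self) -> None:
        # The finer-grained axes only make sense when polarization is present.
        if not self.polarized and (self.pol_type or self.manifestation):
            raise ValueError("type/manifestation require polarized=True")


sample = PolarAnnotation(
    text="They always ruin everything for the rest of us.",
    language="en",
    polarized=True,
    pol_type="political",
    manifestation="us_vs_them_framing",
)
```

Structuring annotations this way makes the paper's finding concrete: a model can get `polarized` right while still failing on the two conditional, context-heavy fields.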

📝 Abstract
Online polarization poses a growing challenge for democratic discourse, yet most computational social science research remains monolingual, culturally narrow, or event-specific. We introduce POLAR, a multilingual, multicultural, and multi-event dataset with over 23k instances in seven languages from diverse online platforms and real-world events. Polarization is annotated along three axes: presence, type, and manifestation, using a variety of annotation platforms adapted to each cultural context. We conduct two main experiments: (1) we fine-tune six multilingual pretrained language models in both monolingual and cross-lingual setups; and (2) we evaluate a range of open and closed large language models (LLMs) in few-shot and zero-shot scenarios. Results show that while most models perform well on binary polarization detection, they achieve substantially lower scores when predicting polarization types and manifestations. These findings highlight the complex, highly contextual nature of polarization and the need for robust, adaptable approaches in NLP and computational social science. All resources will be released to support further research and effective mitigation of digital polarization globally.
Problem

Research questions and friction points this paper is trying to address.

Overcoming the monolingual, culturally narrow, and event-specific scope of prior polarization research
Building a multilingual, multicultural benchmark for analyzing online polarization across real-world events
Assessing how well current models detect polarization and capture its contextual nuances
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual, multicultural dataset with over 23k instances in seven languages
Fine-tuned six multilingual pretrained language models in monolingual and cross-lingual setups
Evaluated open and closed LLMs in few-shot and zero-shot scenarios
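The zero-shot evaluation setup can be sketched as a simple prompt builder for the binary existence axis. The wording and the yes/no answer format are illustrative assumptions; the paper's actual prompts are not given in this summary.

```python
def zero_shot_prompt(text: str) -> str:
    """Build a zero-shot classification prompt for the binary existence axis.

    The instruction wording is a hypothetical sketch, not the prompt
    used in the paper.
    """
    return (
        "Does the following social media post express online polarization "
        "(a strong us-vs-them division between social groups)? "
        "Answer 'yes' or 'no'.\n\n"
        f"Post: {text}\n"
        "Answer:"
    )


prompt = zero_shot_prompt("They always ruin everything for the rest of us.")
```

A few-shot variant would prepend labeled examples before the target post; the same template could be extended to the type and manifestation axes by swapping the yes/no question for a label-set question.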