IndiTag: An Online Media Bias Analysis and Annotation System Using Fine-Grained Bias Indicators

📅 2024-03-20
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
📄 PDF
🤖 AI Summary
Amid growing information overload and media bias, the public needs trustworthy, interpretable bias-detection tools. This paper proposes a dual-path analysis framework that integrates large language models (LLMs) with explainable bias indicators to jointly model bias types, intensity, and contextual evidence in news text. The authors build an online analytical system supporting fine-grained, indicator-driven automated identification and human-in-the-loop annotation, backed by vector retrieval and an interactive web interface. Evaluation on four cross-platform news datasets demonstrates the method's effectiveness; the source code is open-sourced and the platform is publicly accessible. The core idea is to inject structured bias knowledge into the LLM's reasoning process, improving both predictive accuracy and interpretability.

📝 Abstract
In the age of information overload and polarized discourse, understanding media bias has become imperative for informed decision-making and for fostering a balanced public discourse. This paper presents IndiTag, an innovative online media bias analysis and annotation system that leverages fine-grained bias indicators to dissect and annotate bias in digital content. IndiTag offers a novel approach by combining large language models, bias indicators, and a vector database to automatically detect and interpret bias. Complemented by a user-friendly interface facilitating both automated bias analysis and manual annotation, IndiTag provides a comprehensive platform for in-depth bias examination. We demonstrate the efficacy and versatility of IndiTag through experiments on four datasets encompassing news articles from diverse platforms. Furthermore, we discuss potential applications of IndiTag in fostering media literacy, facilitating fact-checking initiatives, and enhancing the transparency and accountability of digital media platforms. IndiTag stands as a valuable tool in the pursuit of a more informed, discerning, and inclusive public discourse in the digital age. The demonstration video can be accessed at https://youtu.be/Gt2T4T7DYqs. We release an online system for end users, and the source code is available at https://github.com/lylin0/IndiTag.
Problem

Research questions and friction points this paper is trying to address.

Automatically detecting bias in news articles
Providing fine-grained bias indicators for readers
Enhancing media literacy through automated analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages large language models for bias detection
Uses fine-grained bias indicators for analysis
Incorporates vector database for automated interpretation
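The pipeline implied by the bullets above (retrieve relevant bias indicators from a vector store, then have an LLM reason over them) can be sketched as follows. This is an illustrative sketch, not the authors' code: the indicator texts, the bag-of-words "embedding", and the prompt wording are hypothetical stand-ins for IndiTag's real indicator set, embedding model, and LLM call.

```python
# Sketch: retrieve candidate bias indicators for a sentence by vector
# similarity, then build an LLM prompt grounded in those indicators.
import math
from collections import Counter

# Hypothetical fine-grained bias indicators (placeholders, not IndiTag's set).
BIAS_INDICATORS = [
    "loaded language: emotionally charged words that sway the reader",
    "one-sided reporting: only one side of a controversy is quoted",
    "labeling: applying charged labels to people or groups",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_indicators(sentence: str, k: int = 2) -> list[str]:
    """Rank indicators by similarity to the sentence (vector-database lookup)."""
    q = embed(sentence)
    ranked = sorted(BIAS_INDICATORS,
                    key=lambda ind: cosine(q, embed(ind)), reverse=True)
    return ranked[:k]

def build_prompt(sentence: str) -> str:
    """Inject retrieved indicators into the prompt an LLM would receive."""
    bullets = "\n".join(f"- {h}" for h in retrieve_indicators(sentence))
    return (f"Given these candidate bias indicators:\n{bullets}\n"
            f"Decide which (if any) apply and explain the evidence.\n"
            f"Sentence: {sentence}")
```

The design point this illustrates is the paper's stated innovation: rather than asking an LLM "is this biased?" directly, the system first narrows the decision to a small set of structured indicators, which makes the model's answer both easier to verify and easier to annotate.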
Luyang Lin
The Chinese University of Hong Kong, China; MoE Key Laboratory of High Confidence Software Technologies, China
Lingzhi Wang
Associate Professor, Harbin Institute of Technology, Shenzhen
Artificial Intelligence · Information Security · NLP · Social Media Analysis
Jinsong Guo
University College London, UK
Jing Li
Department of Computing, The Hong Kong Polytechnic University, China
Kam-Fai Wong
The Chinese University of Hong Kong, China; MoE Key Laboratory of High Confidence Software Technologies, China