A Browser-based Open Source Assistant for Multimodal Content Verification

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the growing challenge of misinformation amplified by generative AI, and the fact that existing NLP verification tools are often inaccessible to non-expert users and lack integrated workflows. To bridge this gap, the paper proposes an open-source browser extension that embeds multimodal content verification capabilities directly into everyday web browsing. The system lets users submit URLs or media files, automatically extracts textual and multimodal content, and analyzes credibility and the likelihood of AI generation using a backend ensemble of NLP classifiers. It delivers intuitive, actionable feedback within the browsing environment itself. As the core component of the VERIFICATION PLUGIN, the tool has already served over 140,000 users, demonstrating real-world effectiveness in helping non-technical individuals identify both deceptive and AI-generated content.

📝 Abstract
Disinformation and false content produced by generative AI pose a significant challenge for journalists and fact-checkers who must rapidly verify digital media. While there is an abundance of NLP models for detecting credibility signals such as persuasion techniques, subjectivity, or machine-generated text, these methods often remain inaccessible to non-expert users and are not integrated into their daily workflows as a unified framework. This paper demonstrates the VERIFICATION ASSISTANT, a browser-based tool designed to bridge this gap. The VERIFICATION ASSISTANT, a core component of the widely adopted VERIFICATION PLUGIN (140,000+ users), allows users to submit URLs or media files to a unified interface. It automatically extracts content and routes it to a suite of backend NLP classifiers, delivering actionable credibility signals, flagging likely AI-generated content, and providing other verification guidance in a clear, easy-to-digest format. The paper showcases the tool's architecture, its integration of multiple NLP services, and its real-world application to detecting disinformation.
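The pipeline the abstract describes (extract content, route it to a suite of NLP classifiers, aggregate the results into digestible credibility signals) could be sketched roughly as below. All classifier names, heuristics, and the scoring scheme are illustrative assumptions for the sketch, not the VERIFICATION ASSISTANT's actual API; the real system calls trained model services, not keyword rules.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-ins for the backend classifiers the paper mentions
# (persuasion techniques, subjectivity, machine-generated text). In the
# real system these would be remote NLP model endpoints.
def persuasion_score(text: str) -> float:
    loaded = {"shocking", "outrageous", "you won't believe"}
    return min(1.0, sum(w in text.lower() for w in loaded) / 3)

def subjectivity_score(text: str) -> float:
    subjective = {"i think", "clearly", "obviously", "best", "worst"}
    return min(1.0, sum(w in text.lower() for w in subjective) / 3)

def ai_generated_score(text: str) -> float:
    # Placeholder: real detection would use a trained classifier.
    return 0.0 if len(text.split()) < 5 else 0.1

CLASSIFIERS: dict[str, Callable[[str], float]] = {
    "persuasion": persuasion_score,
    "subjectivity": subjectivity_score,
    "ai_generated": ai_generated_score,
}

@dataclass
class Verdict:
    signals: dict[str, float]

    @property
    def summary(self) -> str:
        # Condense raw scores into the kind of easy-to-digest feedback
        # the tool surfaces in the browser.
        flagged = [name for name, s in self.signals.items() if s >= 0.5]
        return ("credibility concerns: " + ", ".join(flagged)
                if flagged else "no strong signals")

def verify(text: str) -> Verdict:
    """Route extracted content to every classifier and collect signals."""
    return Verdict({name: clf(text) for name, clf in CLASSIFIERS.items()})
```

For example, `verify(extracted_article_text).summary` would name whichever signals crossed the (assumed) 0.5 threshold, mirroring the paper's idea of actionable rather than raw model output.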
Problem

Research questions and friction points this paper is trying to address.

disinformation
multimodal content verification
generative AI
fact-checking
NLP models
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal verification
browser-based tool
NLP integration
AI-generated content detection
disinformation detection
Rosanna Milner
University of Sheffield
Michael Foster
University of Sheffield
Software Engineering, Testing, Computational models, Model Inference
Olesya Razuvayevskaya
University of Sheffield
Ian Roberts
LSHTM
Clinical trials
Valentin Porcellini
AFP Medialab
Denis Teyssou
AFP Medialab
Kalina Bontcheva
Professor of Text Analytics, University of Sheffield
Natural Language Processing