Credibility Governance: A Social Mechanism for Collective Self-Correction under Weak Truth Signals

πŸ“… 2026-03-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the vulnerability of collective judgment on online platforms to weak truth signals, noisy feedback, early popularity bias, and strategic manipulation, which hinder the identification of reliable viewpoints. To mitigate these issues, the authors propose a Credibility Governance (CG) mechanism that dynamically evaluates the credibility of both participants and their contributions, linking influence to supporters’ historical performance and rewarding early and sustained alignment with emerging evidence. Integrating dynamic reputation modeling with credibility-weighted endorsements, CG is evaluated within the POLIS socio-physical simulation environment, which models coupled belief evolution and feedback dynamics. Experimental results demonstrate that CG significantly outperforms conventional voting and stake-weighted approaches under conditions of initial misinformation, observational noise, and adversarial disinformation, achieving faster convergence to truth, reduced path dependence, and enhanced robustness against manipulation.

πŸ“ Abstract
Online platforms increasingly rely on opinion aggregation to allocate real-world attention and resources, yet common signals such as engagement votes or capital-weighted commitments are easy to amplify and often track visibility rather than reliability. This makes collective judgments brittle under weak truth signals, noisy or delayed feedback, early popularity surges, and strategic manipulation. We propose Credibility Governance (CG), a mechanism that reallocates influence by learning which agents and viewpoints consistently track evolving public evidence. CG maintains dynamic credibility scores for both agents and opinions, updates opinion influence via credibility-weighted endorsements, and updates agent credibility based on the long-run performance of the opinions they support, rewarding early and persistent alignment with emerging evidence while filtering short-lived noise. We evaluate CG in POLIS, a socio-physical simulation environment that models coupled belief dynamics and downstream feedback under uncertainty. Across settings with initial majority misalignment, observation noise and contamination, and misinformation shocks, CG outperforms vote-based, stake-weighted, and no-governance baselines, yielding faster recovery to the true state, reduced lock-in and path dependence, and improved robustness under adversarial pressure. Our implementation and experimental scripts are publicly available at https://github.com/Wanying-He/Credibility_Governance.
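The abstract describes CG's two coupled updates: opinion influence is the credibility-weighted sum of endorsements, and agent credibility tracks the long-run, evidence-aligned performance of the opinions an agent supports. A minimal sketch of that loop is below; the specific update rule (an exponential moving average), the learning rate `eta`, and the assumption that each agent endorses one opinion per round are illustrative choices, not the paper's actual equations.

```python
def opinion_influence(supporters, credibility):
    """Influence of one opinion: the sum of its supporters' credibility scores
    (credibility-weighted endorsement, per the abstract)."""
    return sum(credibility[agent] for agent in supporters)

def update_credibility(credibility, supporters_of, evidence_alignment, eta=0.1):
    """Update agent credibility from the performance of supported opinions.

    supporters_of: opinion -> list of supporting agents (one opinion per agent
    per round is assumed here for simplicity).
    evidence_alignment: opinion -> score in [0, 1] measuring how well the
    opinion tracked the emerging evidence this round (a stand-in signal).
    """
    updated = dict(credibility)
    for opinion, supporters in supporters_of.items():
        reward = evidence_alignment[opinion]
        for agent in supporters:
            # Exponential moving average toward the alignment signal:
            # sustained support for evidence-aligned opinions raises
            # credibility, while short-lived noise is damped by eta.
            updated[agent] = (1 - eta) * updated[agent] + eta * reward
    return updated
```

For example, starting all agents at credibility 0.5, supporters of a fully evidence-aligned opinion drift upward (0.5 → 0.55 after one round with `eta=0.1`) while supporters of a misaligned one drift downward, so the aligned opinion's influence grows round over round.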
Problem

Research questions and friction points this paper is trying to address.

credibility governance · collective judgment · weak truth signals · opinion aggregation · strategic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Credibility Governance · collective self-correction · dynamic credibility scoring · evidence-aligned influence · robust opinion aggregation
Wanying He
School of Artificial Intelligence, Wuhan University; BIGAI
Yanxi Lin
Tsinghua University
Ziheng Zhou
University of California, Los Angeles
Xue Feng
Tsinghua University
stretchable and flexible electronics; thin films and interfaces; solid mechanics under unusual conditions; experimental mechanics
Min Peng
School of Artificial Intelligence, Wuhan University
Qianqian Xie
Wuhan University
NLP; LLMs
Zilong Zheng
State Key Laboratory of General Artificial Intelligence, BIGAI
Yipeng Kang
BIGAI
Natural language processing