AI Summary
This work addresses the vulnerability of collective judgment on online platforms to weak truth signals, noisy feedback, early popularity bias, and strategic manipulation, which hinder the identification of reliable viewpoints. To mitigate these issues, the authors propose a Credibility Governance (CG) mechanism that dynamically evaluates the credibility of both participants and their contributions, linking influence to supporters' historical performance and rewarding early and sustained alignment with emerging evidence. Integrating dynamic reputation modeling with credibility-weighted endorsements, CG is evaluated within the POLIS socio-physical simulation environment, which models coupled belief evolution and feedback dynamics. Experimental results demonstrate that CG significantly outperforms conventional voting and stake-weighted approaches under conditions of initial misinformation, observational noise, and adversarial disinformation, achieving faster convergence to truth, reduced path dependence, and enhanced robustness against manipulation.
Abstract
Online platforms increasingly rely on opinion aggregation to allocate real-world attention and resources, yet common signals such as engagement votes or capital-weighted commitments are easy to amplify and often track visibility rather than reliability. This makes collective judgments brittle under weak truth signals, noisy or delayed feedback, early popularity surges, and strategic manipulation. We propose Credibility Governance (CG), a mechanism that reallocates influence by learning which agents and viewpoints consistently track evolving public evidence. CG maintains dynamic credibility scores for both agents and opinions, updates opinion influence via credibility-weighted endorsements, and updates agent credibility based on the long-run performance of the opinions they support, rewarding early and persistent alignment with emerging evidence while filtering short-lived noise. We evaluate CG in POLIS, a socio-physical simulation environment that models coupled belief dynamics and downstream feedback under uncertainty. Across settings with initial majority misalignment, observation noise and contamination, and misinformation shocks, CG outperforms vote-based, stake-weighted, and no-governance baselines, yielding faster recovery to the true state, reduced lock-in and path dependence, and improved robustness under adversarial pressure. Our implementation and experimental scripts are publicly available at https://github.com/Wanying-He/Credibility_Governance.
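To make the two coupled updates concrete, the core loop can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the sum-of-credibility influence rule, the `evidence_scores` signal, and the EMA learning rate `lr` are all assumptions introduced here for exposition.

```python
# Hedged sketch of credibility-weighted endorsement (illustrative only).
# Opinion influence = total credibility of its endorsers; agent
# credibility drifts toward the evidence-alignment of what they backed.

def opinion_influence(endorsements, credibility):
    """Influence of each opinion as the sum of its supporters' credibility."""
    return {op: sum(credibility[a] for a in supporters)
            for op, supporters in endorsements.items()}

def update_credibility(endorsements, credibility, evidence_scores, lr=0.2):
    """Exponential-moving-average update: supporters of opinions that
    align with emerging evidence (score near 1) gain credibility;
    supporters of poorly aligned opinions (score near 0) lose it."""
    new = dict(credibility)
    for op, supporters in endorsements.items():
        score = evidence_scores[op]  # assumed alignment signal in [0, 1]
        for a in supporters:
            new[a] = (1 - lr) * new[a] + lr * score
    return new

# Toy round: agents A and B back the opinion that later matches evidence.
credibility = {"A": 0.5, "B": 0.5, "C": 0.5}
endorsements = {"op1": ["A", "B"], "op2": ["C"]}
influence = opinion_influence(endorsements, credibility)   # op1: 1.0, op2: 0.5
credibility = update_credibility(endorsements, credibility,
                                 {"op1": 0.9, "op2": 0.1})
```

Because the EMA weights the long-run record rather than any single round, a one-off noisy signal shifts credibility only slightly, while persistent alignment compounds, which is the filtering behavior the abstract attributes to CG.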