🤖 AI Summary
This paper addresses causal confusion in misinformation research that arises from conceptual narrowing, specifically an overemphasis on false content itself rather than on the underlying sociocognitive dynamics. Method: it proposes "disagreement," defined as systematic, cross-group conflict in attitudes and beliefs, as the primary analytical unit, integrating social psychology and computational social science through large-scale attitude surveys, controlled experiments, and cross-group belief network modeling. Contribution/Results: first, it operationalizes disagreement as a measurable construct, revealing why fact-checking interventions are structurally ineffective at mitigating deep-seated, value-based disagreement. Second, it establishes a quantified framework linking dimensions of disagreement to downstream social consequences (e.g., polarization, erosion of trust). Third, it provides an empirically grounded pathway for testing "post-truth" claims by shifting the focus from veracity assessment to the socio-epistemic conditions that sustain persistent disagreement. Collectively, this approach advances a more rigorous, mechanism-oriented framework for misinformation research.
📝 Abstract
Experts consider misinformation a significant societal concern because of its associated harms, such as political polarization, erosion of trust, and public health challenges. However, these broad effects can occur independently of misinformation, revealing a mismatch between them and the narrow focus of the prevailing misinformation concept. We propose disagreement, that is, conflicting attitudes and beliefs, as a more effective framework for studying these effects. Among other things, this approach reveals the limitations of current misinformation interventions and offers a way to empirically test whether we are living in a post-truth era.