🤖 AI Summary
In the digital era, social media has become a central arena for public discourse, yet technology-driven disinformation (bots, sockpuppet accounts, and deepfakes) severely undermines the credibility of scientific sources and erodes societal resilience. To address this, we systematically frame "social cybersecurity" as an emerging interdisciplinary field: we first formally define its disciplinary boundaries and core paradigms, then propose a comprehensive theoretical framework covering attack detection, evaluation methodology, a taxonomy of challenges, and evolutionary trajectories. Our approach integrates graph neural networks, behavioral temporal modeling, multimodal fake-content detection, adversarial robustness assessment, and human-AI collaborative verification, and we develop standardized evaluation benchmarks and reproducible detection pipelines. Key findings identify two critical bottlenecks: limited cross-platform generalizability and weak detection of deep semantic manipulation. This work provides both theoretical foundations and technical pathways for trustworthy AI governance.
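To make the graph-based detection idea above concrete, here is a minimal, illustrative sketch of one round of GCN-style mean aggregation over a toy follower graph, of the kind a bot-detection pipeline might apply before classification. All account names, features, and the threshold are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: one round of mean neighbor aggregation
# (the core operation of a graph convolutional layer) on a toy
# follower graph. Account names and feature values are hypothetical.

def aggregate(features, edges):
    """Replace each node's feature vector with the mean over
    itself and its neighbors (undirected edges)."""
    neighbors = {n: [] for n in features}
    for src, dst in edges:
        neighbors[src].append(dst)
        neighbors[dst].append(src)
    out = {}
    for node, feat in features.items():
        group = [feat] + [features[m] for m in neighbors[node]]
        out[node] = [sum(vals) / len(group) for vals in zip(*group)]
    return out

# Toy per-account features: [posts_per_day, follower/following ratio]
features = {"alice": [2.0, 1.5], "bot1": [80.0, 0.1], "bot2": [95.0, 0.2]}
edges = [("bot1", "bot2"), ("alice", "bot1")]

smoothed = aggregate(features, edges)
# Bots that cluster together retain high aggregated posting rates,
# so even a crude threshold on the smoothed feature can flag them.
flagged = [n for n, f in smoothed.items() if f[0] > 50]
```

In practice a detector would stack several such layers with learned weights and nonlinearities (e.g. via a GNN library) and train a classifier on labeled accounts; the point here is only that aggregation lets coordinated accounts reinforce each other's signal.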
📝 Abstract
In today's digital era, the Internet, especially social media platforms, plays a significant role in shaping public opinions, attitudes, and beliefs. Unfortunately, the credibility of scientific information sources is often undermined by the spread of misinformation through various means, including technology-driven tools such as bots, cyborgs, trolls, sockpuppets, and deepfakes. This manipulation of public discourse serves antagonistic business agendas and compromises civil society. In response to this challenge, a new scientific discipline has emerged: social cybersecurity.