🤖 AI Summary
This study addresses three core challenges in social media misinformation: rapid dissemination, cross-lingual and cross-platform evolution, and the limitations and misuse risks of large language models (LLMs). We systematically survey state-of-the-art LLM-augmented fake news detection approaches. Methodologically, we propose a paradigm for semantically enhanced, multimodally robust detection that integrates multimodal features, graph neural networks (GNNs), and adversarial training to enable real-time analysis in dynamic social environments. Our key contributions are: (1) the first systematic identification of ethical gaps in LLM-driven detection, namely generative misuse, bias amplification, and governance deficits; and (2) a three-dimensional evaluation framework spanning technical capability, robustness bottlenecks, and ethical governance. We identify three critical future directions: style-agnostic detection, cross-lingual generalization, and collaborative governance, thereby providing a theoretical foundation and practical roadmap for trustworthy, adaptive, and ethically grounded next-generation fake news detection systems.
📝 Abstract
The pervasive dissemination of fake news through social media platforms poses critical risks to public trust, societal stability, and democratic institutions. Meeting this challenge requires detection methodologies that keep pace with the dynamic, multimodal nature of misinformation. This review surveys recent advances in the fake news literature, including large language model (LLM)-powered detection, multimodal frameworks, graph-based methods, and adversarial training. Across these approaches, two findings stand out: LLM augmentation improves accuracy through richer semantic understanding, and cross-modality fusion yields more robust detection. The review further identifies critical gaps in adaptability to evolving social media trends, in real-time and cross-platform detection capabilities, and in addressing the ethical challenges raised by the misuse of LLMs. Future directions emphasize style-agnostic models, cross-lingual detection frameworks, and robust policies to mitigate LLM-driven misinformation. This synthesis lays a concrete foundation for researchers and practitioners committed to strengthening fake news detection systems against the growing complexity of the digital landscape.