AI Summary
In the LLM era, online discourse is increasingly plagued by the proliferation of hate speech and frequent breakdowns in mutual understanding, threatening social cohesion and democratic values. To address this, we develop an interdisciplinary analytical framework bridging NLP and the social sciences. Methodologically, we propose a novel taxonomy for evaluating dialogue quality; introduce the first classification schema for dialogue guidance datasets encompassing multiple normative objectives; and design, for the first time, LLM-native dialogue guidance techniques, integrating human-AI collaborative modeling, prompt engineering, and targeted intervention strategies into a holistic "Assessment-Intervention-Governance" theoretical pipeline. Our core contribution lies in transcending isolated technical optimizations: we deliver a practical, implementation-ready framework for fostering rational, inclusive dialogue, alongside a transdisciplinary research paradigm that supports the development of resilient, equitable digital public spheres.
Abstract
We present a survey of methods for assessing and enhancing the quality of online discussions, focusing on the potential of Large Language Models (LLMs). While online discussions aim, at least in theory, to foster mutual understanding, they often devolve into harmful exchanges, such as hate speech, threatening social cohesion and democratic values. Recent advancements in LLMs enable facilitation agents that not only moderate content but also actively improve the quality of interactions. Our survey synthesizes ideas from Natural Language Processing (NLP) and the Social Sciences to provide (a) a new taxonomy of discussion quality evaluation, (b) an overview of intervention and facilitation strategies, along with a new taxonomy of conversation facilitation datasets, and (c) an LLM-oriented roadmap of good practices and future research directions, from both technological and societal perspectives.