🤖 AI Summary
This study systematically investigates the dual-use risks of natural language generation (NLG) technologies, which, beyond their beneficial applications in human-computer interaction, can be misused for disinformation dissemination, social manipulation, and automated deception. Method: We conducted a qualitative survey within the ACL SIGGEN community, engaging 23 NLG researchers, and developed the first domain-specific dual-use risk taxonomy for NLG, identifying three core abuse scenarios; consensus analysis yielded a community-driven governance reference report. Contribution/Results: The work fills an empirical gap in NLP ethics governance for generative AI, providing a publicly accessible, theory-grounded foundation and actionable guidance for technical impact assessment, policy development, and responsible innovation in NLG.
📝 Abstract
This report documents the results of a recent survey in the SIGGEN community, focusing on dual-use issues in Natural Language Generation (NLG). SIGGEN is the Special Interest Group (SIG) of the Association for Computational Linguistics (ACL) for researchers working on NLG. The survey was prompted by the ACL executive board, which asked all SIGs to provide an overview of dual-use issues within their respective subfields. The survey was sent out in October 2024 and the results were processed in January 2025. With 23 respondents, the survey is presumably not representative of all SIGGEN members, but this document at least offers a helpful resource for future discussions. This report is open to feedback from the SIGGEN community. Let me know if you have any questions or comments!