LLM Harms: A Taxonomy and Discussion

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses systemic risks across the full lifecycle of large language models (LLMs). To tackle the fragmentation in current LLM governance, we propose a unified harm taxonomy spanning five phases: pre-development, output generation, misuse and malicious application, downstream integration, and post-deployment adaptation. Methodologically, we design an interpretable and auditable LLM governance framework integrating dynamic domain-adaptive auditing, standardized risk-mitigation protocols, and joint transparency-accountability modeling. Our key contribution is the first end-to-end, closed-loop risk-management process (identification, assessment, and response) systematically aligned with the LLM lifecycle. The framework demonstrates strong practicality, validated across diverse real-world deployment scenarios. Results show significant improvements in the deployment safety, trustworthiness, and societal adaptability of LLMs. This work provides a structured, scalable governance pathway for responsible AI development.


📝 Abstract
This study addresses categories of harm surrounding Large Language Models (LLMs) in the field of artificial intelligence. It examines five categories of harm arising before, during, and after the development of AI applications: pre-development, direct output, misuse and malicious application, downstream application, and post-deployment adaptation. It underscores the need to define the risks of the current landscape in order to ensure accountability and transparency and to navigate bias when adapting LLMs for practical applications. It proposes domain-specific mitigation strategies and future directions, together with a dynamic auditing system, as a standardized proposal to guide the responsible development and integration of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Categorizes harms from LLMs across development stages.
Addresses risks like bias, misuse, and accountability in applications.
Proposes mitigation strategies and auditing for responsible LLM integration.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes mitigation strategies for LLM harms.
Suggests a dynamic auditing system for responsible development.
Provides a standardized proposal for LLM integration.
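The paper itself presents the taxonomy and auditing system as a governance framework rather than as code. Purely as an illustrative sketch, the five lifecycle phases from the abstract and a minimal coverage-style audit check could be modeled as follows; all names (`LifecyclePhase`, `HarmRecord`, `audit_coverage`) are hypothetical and not taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum


class LifecyclePhase(Enum):
    """The five harm phases named in the paper's taxonomy."""
    PRE_DEVELOPMENT = "pre-development"
    DIRECT_OUTPUT = "direct output"
    MISUSE = "misuse and malicious application"
    DOWNSTREAM = "downstream application"
    POST_DEPLOYMENT = "post-deployment adaptation"


@dataclass
class HarmRecord:
    """One identified harm, tagged with its phase and any mitigations."""
    description: str
    phase: LifecyclePhase
    mitigations: list


def audit_coverage(records):
    """Return lifecycle phases with no mitigated harm on record,
    i.e. gaps a dynamic auditing pass would flag for review."""
    covered = {r.phase for r in records if r.mitigations}
    return [p for p in LifecyclePhase if p not in covered]


records = [
    HarmRecord("training-data bias", LifecyclePhase.PRE_DEVELOPMENT,
               ["dataset audit"]),
    HarmRecord("toxic generations", LifecyclePhase.DIRECT_OUTPUT,
               ["output filtering"]),
]
gaps = audit_coverage(records)  # phases still lacking mitigations
```

In this toy setup, `audit_coverage` reports the misuse, downstream, and post-deployment phases as uncovered, mirroring the paper's point that harms must be tracked across every stage of the lifecycle, not only during training and generation.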