Multi-Agent LLMs as Ethics Advocates in AI-Based Systems

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the frequent neglect of ethical considerations during requirements elicitation for AI systems, and the time-intensive manual collaboration that eliciting them otherwise demands. To tackle these challenges, the authors propose a multi-agent large language model framework that integrates an ethics advocate agent: a role-based agent that performs critical ethical analysis of a system description and generates actionable draft ethics requirements, improving both the efficiency and the coverage of ethical alignment in requirements engineering. In two cross-domain case studies, the approach, operating without human intervention, identified most of the ethics requirements that researchers had previously elicited through 30-minute interviews, while also proposing several additional well-justified requirements, demonstrating its effectiveness and generalizability. The core contributions are threefold: (1) the integration of a role-based ethics advocate agent into the requirements elicitation process; (2) early-stage, automated, and scalable ethical reasoning over system descriptions; and (3) the explicit recognition that human feedback remains essential for ensuring the reliability and contextual validity of the generated ethics requirements.

📝 Abstract
Incorporating ethics into the requirements elicitation process is essential for creating ethically aligned systems. Although manual elicitation of ethics requirements is effective, it requires diverse input from multiple stakeholders, which can be challenging due to time and resource constraints. Moreover, ethics is often given low priority in the requirements elicitation process. This study proposes a framework for generating draft ethics requirements by introducing an ethics advocate agent into a multi-agent LLM setting. This agent critiques the system description and provides input on ethical issues. The proposed framework is evaluated through two case studies from different contexts, demonstrating that it captures the majority of the ethics requirements identified by researchers during 30-minute interviews and introduces several additional relevant requirements. However, the evaluation also highlights reliability issues in generating ethics requirements, emphasizing the need for human feedback in this sensitive domain. We believe this work can facilitate broader adoption of ethics in the requirements engineering process, ultimately leading to more ethically aligned products.
Problem

Research questions and friction points this paper is trying to address.

Incorporating ethics into AI system requirements efficiently
Overcoming challenges of manual ethics requirement elicitation
Ensuring reliability in AI-generated ethics requirements drafts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent LLM framework for drafting ethics requirements
Ethics advocate agent that critiques system descriptions
Draft generation paired with necessary human-feedback review
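The flow described above can be sketched in code. Note this is a hypothetical illustration, not the authors' implementation: the class and function names (`EthicsAdvocate`, `elicit_ethics_requirements`, `human_approves`) and the stubbed `llm` function are assumptions standing in for real LLM calls and a real review step.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned text so the sketch runs offline."""
    if prompt.startswith("critique"):
        return "Risk: user location data may be retained indefinitely."
    return "REQ-E1: The system shall let users delete their location history."

@dataclass
class EthicsAdvocate:
    """Role-based agent that raises ethical issues and drafts requirements."""
    critiques: list = field(default_factory=list)

    def critique(self, system_description: str) -> str:
        # Critique step: surface an ethical issue in the system description.
        issue = llm(f"critique the ethics of: {system_description}")
        self.critiques.append(issue)
        return issue

    def draft_requirements(self) -> list:
        # Turn each raised issue into a draft ethics requirement.
        return [llm(f"rewrite this issue as a requirement: {c}")
                for c in self.critiques]

def human_approves(draft: str) -> bool:
    # Placeholder for the human review the paper deems essential;
    # a real system would present drafts to a stakeholder here.
    return True

def elicit_ethics_requirements(system_description: str) -> list:
    advocate = EthicsAdvocate()
    advocate.critique(system_description)
    drafts = advocate.draft_requirements()
    # Generated drafts are unreliable on their own, so the
    # human-in-the-loop gate is kept explicit.
    return [d for d in drafts if human_approves(d)]
```

In this sketch the advocate is one agent in what the paper describes as a multi-agent setting; other role-based agents (e.g. a requirements analyst) would participate in the same loop, with the advocate critiquing their outputs.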