Towards autonomous normative multi-agent systems for Human-AI software engineering teams

📅 2025-12-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses low human–AI collaboration efficiency and the lack of normative foundations and trustworthiness in multi-agent coordination within software engineering. Methodologically, we propose a Normative Autonomous Multi-Agent System (NAMAS) spanning the entire software lifecycle, integrating large language models (LLMs) with the Belief–Desire–Intention (BDI) architecture to endow agents with human-like reasoning capabilities. We incorporate deontic modalities (commitment, obligation, prohibition, and permission) to establish a dynamic, norm-governed collaboration framework, further enhanced by memory-augmented reasoning and collaborative planning to ensure explainable and verifiable agent interactions. Experimental evaluation demonstrates significant improvements across the design, development, testing, and deployment phases in terms of development velocity, system reliability, and environmental adaptability. To our knowledge, NAMAS is the first approach to realize a compliant, human–AI symbiotic software engineering paradigm supporting dynamic, self-adaptive decision-making.

📝 Abstract
This paper envisions a transformative paradigm in software engineering, where Artificial Intelligence, embodied in fully autonomous agents, becomes the primary driver of core software development activities. We introduce a new class of software engineering agents, empowered by Large Language Models and equipped with beliefs, desires, intentions, and memory to enable human-like reasoning. These agents collaborate with humans and other agents to design, implement, test, and deploy software systems with a level of speed, reliability, and adaptability far beyond current software development processes. Their coordination and collaboration are governed by norms expressed as deontic modalities (commitments, obligations, prohibitions, and permissions) that regulate interactions and ensure regulatory compliance. These innovations establish a scalable, transparent, and trustworthy framework for future Human-AI software engineering teams.
Problem

Research questions and friction points this paper is trying to address.

Develop autonomous AI agents for software engineering tasks
Enable human-like reasoning in agents using LLMs and BDI models
Govern multi-agent collaboration with normative frameworks for compliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomous agents using Large Language Models for reasoning
Norms governing collaboration via deontic modalities
Scalable framework for Human-AI software engineering teams
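To make the norm-governed collaboration idea concrete, here is a minimal, hypothetical sketch of how deontic modalities might regulate the actions of BDI-style agents. All class, field, and action names below are illustrative assumptions, not the paper's actual NAMAS implementation:

```python
from dataclasses import dataclass, field
from enum import Enum

class Modality(Enum):
    """The four deontic modalities named in the paper."""
    COMMITMENT = "commitment"
    OBLIGATION = "obligation"
    PROHIBITION = "prohibition"
    PERMISSION = "permission"

@dataclass
class Norm:
    modality: Modality
    agent: str   # agent the norm applies to (hypothetical naming)
    action: str  # regulated action, e.g. "deploy_to_prod"

@dataclass
class BDIAgent:
    """Toy Belief-Desire-Intention agent state."""
    name: str
    beliefs: set = field(default_factory=set)
    desires: list = field(default_factory=list)
    intentions: list = field(default_factory=list)

def is_permitted(agent: BDIAgent, action: str, norms: list) -> bool:
    """Simple compliance check: a matching prohibition vetoes the
    action; otherwise it is allowed by default."""
    for n in norms:
        if n.agent == agent.name and n.action == action:
            if n.modality is Modality.PROHIBITION:
                return False
    return True

norms = [
    Norm(Modality.PROHIBITION, "dev_agent", "deploy_to_prod"),
    Norm(Modality.OBLIGATION, "test_agent", "run_regression_suite"),
]
dev = BDIAgent("dev_agent")
print(is_permitted(dev, "deploy_to_prod", norms))    # False
print(is_permitted(dev, "write_unit_tests", norms))  # True
```

A real system would also need norm activation/expiry conditions and sanctions for violated obligations; this sketch only shows the basic idea of filtering an agent's intended actions through explicit norms.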
H. Dam
Decision Systems Lab, University of Wollongong, Australia
Geeta Mahala
Decision Systems Lab, University of Wollongong, Australia
Rashina Hoda
Professor of Software Engineering, Faculty of Information Technology, Monash University, Australia
Agile Software Development, Agile Project Management, Grounded Theory, Human Aspects, SE4AI
Xi Zheng
Macquarie University, Australia
Cristina Conati
University of British Columbia, Canada