An LLM-based multi-agent framework for agile effort estimation

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Agile effort estimation has long suffered from subjectivity, poor interpretability, and limited human-AI interaction. This paper introduces what the authors describe as the first LLM-driven multi-agent collaboration framework designed specifically for effort estimation. It models human negotiation via role-specialized agents, such as Developer, Estimator, and Coordinator, that jointly perform natural-language reasoning, dynamic dialogue coordination, and consensus-building to produce transparent, interactive, and explainable estimation decisions. Evaluated on a real-life dataset, the approach outperforms existing state-of-the-art models across evaluation metrics in the majority of cases. A user study further confirms clear improvements in estimation transparency and team-collaboration experience. The framework establishes a novel paradigm for leveraging LLMs in software-engineering decision-making, advancing both technical capability and human-centered AI integration.
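The negotiation loop described above can be sketched in Python. This is a hypothetical illustration, not the paper's implementation: the `Agent.estimate` stub stands in for an LLM call, and the Fibonacci point scale, the `bias` parameter, and the median-nudging rule are assumptions chosen to keep the sketch runnable and deterministic.

```python
# Hypothetical sketch of role-specialized agents converging on a story-point
# estimate. In the real framework each agent's turn would be an LLM call with
# natural-language reasoning; here a deterministic stub stands in for it.

FIBONACCI = [1, 2, 3, 5, 8, 13]  # common agile story-point scale (assumption)

class Agent:
    def __init__(self, role: str, bias: int):
        self.role = role   # e.g. "Developer", "Estimator"
        self.bias = bias   # stub for differing judgment between roles
        self.idx = None    # current position on the point scale

    def estimate(self, story: str, peer_estimates: list[int]) -> int:
        if self.idx is None:
            # First round: a crude prior based on story length plus role bias.
            self.idx = min(len(FIBONACCI) - 1,
                           max(0, len(story) // 20 + self.bias))
        elif peer_estimates:
            # Discussion round: nudge one scale step toward the peers' median.
            median = sorted(peer_estimates)[len(peer_estimates) // 2]
            if FIBONACCI[self.idx] < median:
                self.idx += 1
            elif FIBONACCI[self.idx] > median:
                self.idx -= 1
        return FIBONACCI[self.idx]

def coordinate(story: str, agents: list[Agent], max_rounds: int = 5):
    """Coordinator role: run discussion rounds until all agents agree."""
    estimates: list[int] = []
    for _ in range(max_rounds):
        # Agents estimate simultaneously, seeing only the previous round.
        estimates = [agent.estimate(story, estimates) for agent in agents]
        if len(set(estimates)) == 1:
            return estimates[0], True  # consensus reached
    # Fall back to the most common estimate if no consensus within budget.
    return max(set(estimates), key=estimates.count), False
```

As a usage example, a Developer agent (`bias=0`) and a higher-biased Estimator agent (`bias=2`) that start two scale steps apart converge within a few rounds, loosely mirroring planning-poker-style re-voting.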

📝 Abstract
Effort estimation is a crucial activity in agile software development, where teams collaboratively review, discuss, and estimate the effort required to complete user stories in a product backlog. Current practices in agile effort estimation heavily rely on subjective assessments, leading to inaccuracies and inconsistencies in the estimates. While recent machine learning-based methods show promising accuracy, they cannot explain or justify their estimates and lack the capability to interact with human team members. Our paper fills this significant gap by leveraging the powerful capabilities of Large Language Models (LLMs). We propose a novel LLM-based multi-agent framework for agile estimation that not only can produce estimates, but also can coordinate, communicate and discuss with human developers and other agents to reach a consensus. Evaluation results on a real-life dataset show that our approach outperforms state-of-the-art techniques across all evaluation metrics in the majority of the cases. Our human study with software development practitioners also demonstrates an overwhelmingly positive experience in collaborating with our agents in agile effort estimation.
Problem

Research questions and friction points this paper is trying to address.

Addresses subjective inaccuracies in agile effort estimation
Leverages LLMs for an explainable, collaborative estimation framework
Enables multi-agent consensus through human-AI interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based multi-agent framework for estimation
Coordinates and communicates with human developers
Outperforms state-of-the-art techniques across metrics in most cases