Interactive Evaluation of Large Language Models for Multi-Requirement Software Engineering Tasks

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing static, single-turn benchmarks inadequately assess the fine-grained capabilities of large language models on multi-objective software engineering tasks. Method: the authors propose the first dynamic evaluation framework for multi-constraint programming, modeling task structure as a requirement dependency graph and employing a ground-truth-aware "interviewer" LLM to drive progressive, diagnostic dialogues. The dual-LLM interaction architecture extends the DevAI benchmark into a 55-task dataset with verified ground-truth solutions and expert-validated prompts. Contribution/Results: the framework enables precise identification of model error patterns and collaborative potential, outperforming static baselines in diagnostic granularity and reliability. It establishes a more rigorous, behavior-aware evaluation paradigm for code-generation agents, supporting robust development and iterative improvement of programming-focused foundation models.

📝 Abstract
Standard single-turn, static benchmarks fall short in evaluating the nuanced capabilities of Large Language Models (LLMs) on complex tasks such as software engineering. In this work, we propose a novel interactive evaluation framework that assesses LLMs on multi-requirement programming tasks through structured, feedback-driven dialogue. Each task is modeled as a requirement dependency graph, and an "interviewer" LLM, aware of the ground-truth solution, provides minimal, targeted hints to an "interviewee" model to help correct errors and fulfill target constraints. This dynamic protocol enables fine-grained diagnostic insights into model behavior, uncovering strengths and systematic weaknesses that static benchmarks fail to measure. We build on DevAI, a benchmark of 55 curated programming tasks, by adding ground-truth solutions and evaluating the relevance and utility of interviewer hints through expert annotation. Our results highlight the importance of dynamic evaluation in advancing the development of collaborative code-generating agents.
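The interviewer/interviewee protocol from the abstract can be sketched as a simple loop: the ground-truth-aware interviewer inspects each attempt and either stops (all requirements met) or emits a minimal hint that is fed back to the interviewee. This is a hypothetical sketch, not the paper's implementation; the function names, the prompt-concatenation format, and the `max_turns` cap are all illustrative assumptions.

```python
# Hypothetical sketch of the feedback-driven dialogue described in the abstract.
# `interviewee` and `interviewer_hint` stand in for the two LLMs.
from typing import Callable, Optional, Tuple

def interview_loop(
    interviewee: Callable[[str], str],               # model under evaluation: prompt -> solution
    interviewer_hint: Callable[[str], Optional[str]],  # ground-truth-aware: solution -> minimal hint, or None if all requirements pass
    task_prompt: str,
    max_turns: int = 5,
) -> Tuple[str, int]:
    """Run the dialogue; return the final solution and the number of turns used."""
    prompt = task_prompt
    solution = interviewee(prompt)
    for turn in range(1, max_turns + 1):
        hint = interviewer_hint(solution)
        if hint is None:                # every target constraint satisfied
            return solution, turn
        # Append the targeted hint and ask the interviewee to revise.
        prompt = f"{prompt}\nHint: {hint}\nPrevious attempt:\n{solution}"
        solution = interviewee(prompt)
    return solution, max_turns
```

With stub callables in place of real models, the loop terminates as soon as the interviewer finds no unmet requirement, which is what makes per-turn diagnostics (turns-to-success, hints consumed) easy to log.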
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on multi-requirement software engineering tasks
Assessing nuanced capabilities through interactive feedback-driven dialogue
Uncovering systematic weaknesses static benchmarks fail to measure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive evaluation framework with feedback-driven dialogue
Requirement dependency graph modeling for programming tasks
Interviewer LLM providing targeted hints to interviewee model
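A requirement dependency graph, as named in the bullets above, can be represented as a DAG mapping each requirement to its prerequisites; a natural hinting policy is then to target the first unmet requirement whose prerequisites already pass. This is an illustrative sketch under that assumption, not the paper's implementation; the requirement names and helper functions are hypothetical.

```python
# Illustrative sketch: requirements as a dependency DAG (node -> prerequisites).
from graphlib import TopologicalSorter
from typing import Dict, List, Optional, Set

def evaluation_order(deps: Dict[str, Set[str]]) -> List[str]:
    """Requirements in an order that respects dependencies."""
    return list(TopologicalSorter(deps).static_order())

def next_failing(deps: Dict[str, Set[str]], passed: Set[str]) -> Optional[str]:
    """First unmet requirement whose prerequisites are all satisfied --
    a natural target for the interviewer's next hint."""
    for req in evaluation_order(deps):
        if req not in passed and deps.get(req, set()) <= passed:
            return req
    return None
```

Walking the topological order ensures a hint is never issued for a requirement that cannot be satisfied yet, which keeps the dialogue progressive rather than scattershot.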
👥 Authors
Dimitrios Rontogiannis (Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece)
Maxime Peyrard (Université Grenoble Alpes)
Nicolas Baldwin (EPFL)
Martin Josifoski (Meta)
Robert West (EPFL)
Dimitrios Gunopulos (National and Kapodistrian University of Athens)