Wisdom and Delusion of LLM Ensembles for Code Generation and Repair

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Relying on a single large language model (LLM) for software engineering incurs prohibitive computational costs, and prior work largely overlooks the complementary strengths across diverse coding LLMs. Method: This paper systematically evaluates complementarity among 10 coding LLMs from five model families on code generation and program repair across three benchmark datasets, comparing three ensemble strategies. Contribution/Results: Consensus-based selection suffers from a “popularity trap,” whereas diversity-aware strategies approach the theoretical performance upper bound, realizing up to 95% of it. A lightweight two-model ensemble already yields significant accuracy gains. The paper proposes a multi-model ensemble framework integrating heuristic selection with empirical evaluation to quantify both model complementarity and output consistency. Experimental results show the ensemble’s theoretical upper bound exceeds the best individual model by 83%, validating diversity-driven ensembling as a key pathway to enhanced efficacy.

📝 Abstract
Today's pursuit of a single Large Language Model (LLM) for all software engineering tasks is resource-intensive and overlooks the potential benefits of complementarity, where different models contribute unique strengths. However, the degree to which coding LLMs complement each other and the best strategy for maximizing an ensemble's potential are unclear, leaving practitioners without a clear path to move beyond single-model systems. To address this gap, we empirically compare ten individual LLMs from five families, and three ensembles of these LLMs across three software engineering benchmarks covering code generation and program repair. We assess the complementarity between models and the performance gap between the best individual model and the ensembles. Next, we evaluate various selection heuristics to identify correct solutions from an ensemble's candidate pool. We find that the theoretical upper bound for an ensemble's performance can be 83% above the best single model. Our results show that consensus-based strategies for selecting solutions fall into a "popularity trap," amplifying common but incorrect outputs. In contrast, a diversity-based strategy realizes up to 95% of this theoretical potential, and proves effective even in small two-model ensembles, enabling a cost-efficient way to enhance performance by leveraging multiple LLMs.
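The "popularity trap" the abstract describes can be made concrete with a minimal sketch of consensus-based selection. This is an illustration only, not the paper's actual pipeline: the candidate strings and the five-model pool are hypothetical, standing in for generated patches.

```python
from collections import Counter

# Hypothetical candidate outputs from a five-model ensemble on one task;
# the strings stand in for generated patches.
candidates = ["patch_a", "patch_a", "patch_a", "patch_b", "patch_c"]

def consensus_select(outputs):
    """Majority vote: return the most frequent candidate output."""
    return Counter(outputs).most_common(1)[0][0]

# If patch_a is wrong but three models agree on it, the vote picks it
# anyway: common-but-incorrect outputs get amplified.
print(consensus_select(candidates))  # patch_a
```

A diversity-based strategy would instead keep dissimilar candidates in play rather than collapsing to the mode, which is why it can recover correct minority answers.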
Problem

Research questions and friction points this paper is trying to address.

Evaluating complementarity between coding LLMs for ensemble performance
Identifying optimal selection strategies for ensemble candidate solutions
Assessing performance gap between single models and ensembles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Empirically compares ten individual LLMs across benchmarks
Evaluates selection heuristics for ensemble candidate solutions
Proposes diversity-based strategy to realize ensemble potential
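The "theoretical upper bound" referenced above corresponds to a pick-best oracle: an ensemble solves a task if at least one member model does. A minimal sketch, with hypothetical per-task pass/fail data (`model_a`/`model_b` are illustrative names, not models from the paper):

```python
def oracle_upper_bound(results):
    """Fraction of tasks solved by at least one model in the pool
    (the pick-best oracle the upper bound corresponds to)."""
    per_task = list(zip(*results.values()))
    return sum(any(task) for task in per_task) / len(per_task)

# Hypothetical per-task results for a two-model ensemble.
results = {
    "model_a": [True, False, True, False],
    "model_b": [False, True, True, False],
}
# Each model alone solves 2/4 tasks; their union solves 3/4, so even a
# small ensemble can beat the best single model if errors don't overlap.
print(oracle_upper_bound(results))  # 0.75
```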