🤖 AI Summary
Large language models (LLMs) incur prohibitive computational costs when a single model is expected to handle every software engineering task, and prior work largely overlooks the complementary strengths across diverse coding LLMs. Method: This paper systematically evaluates complementarity among 10 coding LLMs from five model families on code generation and program repair across three benchmark datasets, comparing three ensemble strategies. Contribution/Results: Consensus-based selection suffers from a “popularity trap,” amplifying common but incorrect outputs, whereas diversity-aware strategies realize up to 95% of the theoretical performance upper bound; even a lightweight two-model ensemble yields significant accuracy gains. The authors propose a multi-model ensemble framework integrating heuristic selection with empirical evaluation to quantify both model complementarity and output consistency. Experimental results show the ensemble’s theoretical upper bound exceeds that of the best individual model by 83%, empirically validating diversity-driven ensembling as a key pathway to enhanced efficacy.
📝 Abstract
Today's pursuit of a single Large Language Model (LLM) for all software engineering tasks is resource-intensive and overlooks the potential benefits of complementarity, where different models contribute unique strengths. However, the degree to which coding LLMs complement each other and the best strategy for maximizing an ensemble's potential are unclear, leaving practitioners without a clear path beyond single-model systems.
To address this gap, we empirically compare ten individual LLMs from five model families, and three ensembles of these LLMs, across three software engineering benchmarks covering code generation and program repair. We assess the complementarity between models and the performance gap between the best individual model and the ensembles. Next, we evaluate various selection heuristics for identifying correct solutions within an ensemble's candidate pool.
We find that the theoretical upper bound for an ensemble's performance can be 83% above the best single model. Our results show that consensus-based strategies for selecting solutions fall into a "popularity trap," amplifying common but incorrect outputs. In contrast, a diversity-based strategy realizes up to 95% of this theoretical potential and proves effective even in small two-model ensembles, enabling a cost-efficient way to enhance performance by leveraging multiple LLMs.
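As a minimal illustration (not code from the paper, and with entirely hypothetical candidate outputs), the "popularity trap" and the ensemble's theoretical upper bound can be sketched as two selection rules over a shared candidate pool: majority voting picks the most common answer even when the majority of models converge on the same wrong solution, while the oracle upper bound counts the ensemble as successful whenever any candidate is correct.

```python
from collections import Counter

# Hypothetical candidate solutions from a 5-model ensemble for one task.
# Three models converge on the same incorrect solution; two produce
# distinct answers, one of which happens to be correct.
candidates = ["wrong_A", "wrong_A", "wrong_A", "right", "wrong_B"]

def is_correct(candidate: str) -> bool:
    # Stand-in for running the benchmark's test suite on a candidate.
    return candidate == "right"

# Consensus selection: majority vote amplifies the popular-but-wrong output.
consensus_pick, _ = Counter(candidates).most_common(1)[0]

# Theoretical upper bound: the ensemble succeeds if ANY candidate is correct.
oracle_hit = any(is_correct(c) for c in candidates)

print(consensus_pick)  # "wrong_A" -- the popularity trap
print(oracle_hit)      # True -- a correct solution exists in the pool
```

The gap between these two rules is exactly what diversity-aware selection tries to close: it must surface the minority-but-correct candidate that majority voting buries.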