🤖 AI Summary
This work addresses the pervasive security vulnerabilities in code generated by large language models (LLMs), which are difficult to eliminate through fine-tuning or prompt engineering alone. The authors propose a structured multi-LLM collaboration framework spanning the full vulnerability lifecycle of detection, analysis, and repair, and introduce ten distinct pipeline configurations, including single-model, ensemble, collaborative, and hybrid designs. Experimental results show that ensemble pipelines augmented with static analysis improve secure code generation by up to 47.3% on SecLLMEval, and that the proposed hybrid pipelines outperform purely collaborative baselines by up to 26.78%. These findings underscore that a thoughtfully designed system architecture enhances code security more effectively than merely scaling up model size.
📝 Abstract
Automatically generating source code from natural language using large language models (LLMs) is becoming common, yet security vulnerabilities persist despite advances in fine-tuning and prompting. In this work, we systematically evaluate whether multi-LLM ensembles and collaborative strategies can meaningfully improve secure code generation. We present MULTI-LLMSECCODEEVAL, a framework for assessing and enhancing security across the vulnerability management lifecycle by combining multiple LLMs with static analysis and structured collaboration. Using SecLLMEval and SecLLMHolmes, we benchmark ten pipelines spanning single-model, ensemble, collaborative, and hybrid designs. Our results show that ensemble pipelines augmented with static analysis improve secure code generation over single-LLM baselines by up to 47.3% on SecLLMEval and 19.3% on SecLLMHolmes, while purely LLM-based collaborative pipelines yield smaller gains of 8.9% to 22.3%. Hybrid pipelines that integrate ensembling, detection, and patching achieve the strongest security performance, outperforming the best ensemble baseline by 1.78% to 4.72% and collaborative baselines by 19.81% to 26.78%. Ablation studies reveal that model scale alone does not ensure security: smaller, structured multi-model ensembles consistently outperform large monolithic LLMs. Overall, our findings demonstrate that secure code does not emerge from scale, but from carefully orchestrated multi-model system design.