🤖 AI Summary
This work addresses the lack of a unified evaluation benchmark for software package build repair across heterogeneous instruction set architectures (ISAs) and programming languages. To this end, we present the first standardized benchmark of this kind, comprising 268 real-world build failure cases spanning multiple ISAs and languages, along with a systematic evaluation protocol. Using this benchmark, we conduct a comprehensive assessment of six state-of-the-art large language models, revealing their limited effectiveness on cross-ISA build repair tasks. Our findings underscore significant shortcomings in current approaches and highlight the technical challenges inherent in this domain. This study establishes a reliable data foundation and evaluation framework to support future research on automated build repair across diverse computational environments.
📝 Abstract
During migration across instruction set architectures (ISAs), software package build repair is a critical task for ensuring the reliability of software deployment and the stability of modern operating systems. While Large Language Models (LLMs) have shown promise in tackling this challenge, prior work has primarily focused on a single ISA and a homogeneous set of programming languages. To address this limitation, we introduce a new benchmark designed for software package build repair across diverse architectures and languages. Comprising 268 real-world software package build failures, the benchmark provides a standardized evaluation pipeline. We evaluate six state-of-the-art LLMs on the benchmark, and the results show that cross-ISA software package build repair remains difficult and requires further advances. By systematically exposing this challenge, the benchmark establishes a foundation for future methods aimed at improving software portability and bridging architectural gaps.