🤖 AI Summary
Manual library migration in Python is labor-intensive, and existing automated tools exhibit poor generalizability. Method: This paper presents the first systematic evaluation of large language models (LLMs) on real-world, end-to-end Python library migration tasks. We propose a dual-evaluation framework jointly measuring behavioral correctness (unit test pass rate) and semantic correctness (code similarity against developer-written reference implementations), and introduce PyMigBench—a benchmark comprising 321 real-world migration cases and 2,989 code changes. Leveraging code generation, semantic matching, and test-based validation, we evaluate Llama 3.1, GPT-4o mini, and GPT-4o. Contribution/Results: GPT-4o achieves 94% code-change accuracy and a 64% unit test pass rate—substantially outperforming open-source LLMs. However, our analysis reveals critical limitations in handling implicit library behaviors and complex contextual dependencies, providing empirical foundations and concrete directions for future migration tool design.
📝 Abstract
Library migration is the process of replacing a used software library with another library that provides similar functionality. Manual library migration is time-consuming and error-prone, as it requires developers to understand the APIs of both libraries, map them, and perform the necessary code transformations. Due to its difficulty, most existing automated techniques and tooling stop at the API mapping stage or support a limited set of code transformations. On the other hand, Large Language Models (LLMs) are good at generating and transforming code and finding similar code, which are necessary upstream tasks for library migration. Such capabilities suggest that LLMs may be suitable for library migration. Therefore, in this paper, we investigate the effectiveness of LLMs for migration between Python libraries. We evaluate three LLMs (Llama 3.1, GPT-4o mini, and GPT-4o) on PyMigBench, a benchmark of 321 real-world library migrations that include 2,989 migration-related code changes. We measure the correctness of the migration results in two ways. We first compare the LLM's migrated code with the developers' migrated code in the benchmark, and then run the unit tests available in the client repositories. We find that Llama 3.1, GPT-4o mini, and GPT-4o correctly migrate 89%, 89%, and 94% of the migration-related code changes, respectively. We also find that 36%, 52%, and 64% of the Llama 3.1, GPT-4o mini, and GPT-4o migrations, respectively, pass the same tests that passed in the developer's migration. Overall, our results suggest that LLMs can be effective in migrating code between libraries, but we also identify cases that pose difficulties for LLMs.