TimeMachine-bench: A Benchmark for Evaluating Model Capabilities in Repository-Level Migration Tasks

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a systematic, real-world evaluation benchmark for software migration tasks, particularly scenarios where dependency updates break existing code. We propose the first project-level Python migration benchmark that supports automated construction and continuous updates, dynamically collecting real-world failure cases induced by dependency upgrades by mining GitHub repository histories, detecting dependency changes, and identifying test failures. To ensure problem solvability and evaluation validity, we further introduce a manually verified subset of these cases. Experimental results demonstrate that although 11 leading large language models exhibit preliminary migration capabilities, they commonly suffer from reliability issues such as generating spurious fixes and performing redundant edits.
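The collection pipeline described above hinges on detecting dependency changes between repository snapshots. A minimal sketch of that step, assuming pinned `requirements.txt`-style files (the file format and function names here are illustrative assumptions, not the paper's actual implementation):

```python
# Hypothetical sketch: detect dependency upgrades between two snapshots of a
# project's pinned requirements. Illustrative only; the benchmark's real
# pipeline mines GitHub histories and is more involved.

def parse_requirements(text: str) -> dict[str, str]:
    """Parse 'package==version' lines into a {package: version} map."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.strip().lower()] = version.strip()
    return pins

def detect_upgrades(old_text: str, new_text: str) -> dict[str, tuple[str, str]]:
    """Return {package: (old_version, new_version)} for every changed pin."""
    old, new = parse_requirements(old_text), parse_requirements(new_text)
    return {
        pkg: (old[pkg], ver)
        for pkg, ver in new.items()
        if pkg in old and old[pkg] != ver
    }

before = """\
numpy==1.24.4
pandas==1.5.3
requests==2.31.0
"""
after = """\
numpy==2.0.0
pandas==1.5.3
requests==2.31.0
"""

print(detect_upgrades(before, after))  # {'numpy': ('1.24.4', '2.0.0')}
```

Commits whose diff yields a non-empty upgrade map become candidates; the pipeline then checks whether the project's tests start failing at that commit.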

📝 Abstract
With the advancement of automated software engineering, research focus is increasingly shifting toward practical tasks that reflect the day-to-day work of software engineers. Among these tasks, software migration, the critical process of adapting code to evolving environments, has been largely overlooked. In this study, we introduce TimeMachine-bench, a benchmark designed to evaluate software migration in real-world Python projects. Our benchmark consists of GitHub repositories whose tests begin to fail in response to dependency updates. The construction process is fully automated, enabling live updates of the benchmark. Furthermore, we curated a human-verified subset to ensure problem solvability. We evaluated agent-based baselines built on top of 11 models, including both strong open-weight and state-of-the-art LLMs, on this verified subset. Our results indicate that, while LLMs show some promise for migration tasks, they continue to face substantial reliability challenges, including spurious solutions that exploit low test coverage and unnecessary edits stemming from suboptimal tool-use strategies. Our dataset and implementation are available at https://github.com/tohoku-nlp/timemachine-bench.
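The abstract's other key selection criterion is that a repository's tests begin to fail after a dependency update. A minimal sketch of that filter, assuming an ordered list of per-commit test outcomes (the data shape and function name are illustrative assumptions, not the benchmark's real format):

```python
# Hypothetical sketch: given per-commit test outcomes (oldest first), flag the
# commits at which a previously passing test suite starts to fail. These are
# candidate dependency-induced breakages to pair with detected upgrades.

def find_breaking_commits(history: list[tuple[str, bool]]) -> list[str]:
    """history: ordered (commit_sha, tests_passed) pairs, oldest first.
    Returns the commits where the suite flips from passing to failing."""
    breaking = []
    for (_, prev_ok), (sha, ok) in zip(history, history[1:]):
        if prev_ok and not ok:
            breaking.append(sha)
    return breaking

history = [
    ("a1", True), ("b2", True), ("c3", False),  # first pass -> fail flip at c3
    ("d4", False), ("e5", True), ("f6", False),  # second flip at f6
]
print(find_breaking_commits(history))  # ['c3', 'f6']
```

In the full pipeline, only flips that coincide with a detected dependency upgrade would be kept as migration tasks; flips from ordinary code bugs would be filtered out.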
Problem

Research questions and friction points this paper is trying to address.

software migration
benchmark
repository-level
dependency updates
LLM evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

software migration
benchmark
LLM evaluation
repository-level tasks
automated software engineering