🤖 AI Summary
This work addresses the limited ability of large language models (LLMs) to infer types for untyped Python codebases. To this end, we introduce TypyBench, the first repository-level benchmark for Python type inference, constructed from 50 high-quality open-source repositories. We propose two novel evaluation metrics: TypeSim, a semantics-aware type similarity measure, and TypeCheck, a cross-file type consistency validator. Together, these metrics systematically assess LLMs' ability to recover function-level type annotations, handle complex nested types, and maintain global type consistency across files. Experimental results reveal that while current LLMs perform reasonably well on simple types, they exhibit significant deficiencies on deeply nested type structures and in keeping types aligned across files. This work moves type inference research from isolated function-level tasks toward holistic, repository-scale consistency, establishing a new evaluation paradigm and identifying concrete directions for future model development.
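To make the idea of a semantics-aware similarity concrete, here is a minimal sketch of a structure-sensitive comparison between rendered type expressions; it is an illustrative stand-in, not TypyBench's actual TypeSim metric. The bracket parser, the 0.5/0.5 weighting between constructor and arguments, and the positional argument pairing are all assumptions made for this example.

```python
def parse(t: str) -> tuple[str, list]:
    """Parse a type string such as 'dict[str, list[int]]' into a
    (constructor, arguments) tree:
    ('dict', [('str', []), ('list', [('int', [])])])."""
    t = t.strip()
    if "[" not in t:
        return (t, [])
    head, inner = t.split("[", 1)
    inner = inner.rsplit("]", 1)[0]
    args, depth, start = [], 0, 0
    for i, ch in enumerate(inner):
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        elif ch == "," and depth == 0:  # split only top-level arguments
            args.append(parse(inner[start:i]))
            start = i + 1
    args.append(parse(inner[start:]))
    return (head.strip(), args)

def _sim(t1, t2) -> float:
    (h1, a1), (h2, a2) = t1, t2
    head = 1.0 if h1 == h2 else 0.0
    if not a1 and not a2:
        return head
    n = max(len(a1), len(a2))
    # Pair type arguments positionally; unmatched positions score zero.
    args = sum(_sim(x, y) for x, y in zip(a1, a2)) / n
    return 0.5 * head + 0.5 * args

def type_sim(pred: str, truth: str) -> float:
    """Toy structural similarity in [0, 1] between two type strings."""
    return _sim(parse(pred), parse(truth))
```

Under this toy score, `list[int]` vs. `list[float]` earns 0.5, and `dict[str, list[int]]` vs. `dict[str, list[str]]` earns 0.875, rather than the flat 0 an exact-match metric would assign; that gradation is the kind of signal a semantics-aware measure is meant to capture.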
📝 Abstract
Type inference for dynamic languages like Python is a persistent challenge in software engineering. While large language models (LLMs) have shown promise in code understanding, their type inference capabilities remain underexplored. We introduce TypyBench, a benchmark designed to evaluate LLMs' type inference across entire Python repositories. TypyBench features two novel metrics: TypeSim, which captures nuanced semantic relationships between predicted and ground truth types, and TypeCheck, which assesses type consistency across codebases. Our evaluation of various LLMs on a curated dataset of 50 high-quality Python repositories reveals that, although LLMs achieve decent TypeSim scores, they struggle with complex nested types and exhibit significant type consistency errors. These findings suggest that future research should shift focus from improving type similarity to addressing repository-level consistency. TypyBench provides a foundation for this new direction, offering insights into model performance across different type complexities and usage contexts. Our code and data are available at https://github.com/typybench/typybench.
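Since the abstract only states that TypeCheck "assesses type consistency across codebases," one plausible (and here purely illustrative) realization is to write a model's predicted annotations back into the repository and run an off-the-shelf checker over the whole tree, so that a prediction that looks fine in isolation is still penalized when another file disagrees with it. The sketch below assumes mypy is installed and the annotations are already materialized in `repo_dir`; the function name and flag choices are assumptions, not TypyBench's implementation.

```python
import subprocess

def count_type_errors(repo_dir: str) -> int:
    """Count the type errors mypy reports across a repository whose
    predicted annotations have been written into the source files."""
    result = subprocess.run(
        ["mypy", "--ignore-missing-imports", "--no-error-summary", repo_dir],
        capture_output=True,
        text=True,
    )
    # Each reported error line looks like "path/to/file.py:LINE: error: ...".
    return sum(1 for line in result.stdout.splitlines() if ": error:" in line)
```

A repository-wide check of this kind is what distinguishes cross-file consistency from per-function similarity: for example, annotating one function's return as `dict` scores well locally but surfaces as an error wherever a caller in another file expects a specific class.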